Core characterization of the new CABRI Water Loop Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritter, G.; Rodiac, F.; Beretz, D.
2011-07-01
The CABRI experimental reactor is located at the Cadarache nuclear research center in southern France. It is operated by the Atomic Energy Commission (CEA) and devoted to IRSN (Institut de Radioprotection et de Surete Nucleaire) safety programmes. Over the last 30 years it has been operated successfully, deepening the knowledge of FBR and LWR fuel behaviour during Reactivity Insertion Accident (RIA) and Loss Of Coolant Accident (LOCA) transients within the reactor safety programmes of IPSN (Institut de Protection et de Surete Nucleaire) and now IRSN. Operation was interrupted in 2003 to allow for a complete facility renewal programme for the needs of the CABRI International Programme (CIP), carried out by IRSN under the OECD umbrella. The principle of operation of the facility is based on the control of ³He, a major gaseous neutron absorber, in the core geometry. The purpose of this paper is to illustrate how several dosimetric devices have been set up to better characterize the core during the upcoming commissioning campaign. It presents the schemes and tools dedicated to core characterization. (authors)
Report on Wind Turbine Subsystem Reliability - A Survey of Various Databases (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, S.
2013-07-01
The wind industry has been challenged by premature subsystem/component failures. Various reliability data collection efforts have demonstrated their value in supporting wind turbine reliability and availability research and development and industrial activities. However, most information on these data collection efforts is scattered rather than held in a centralized place. With the objective of obtaining updated reliability statistics for wind turbines and/or subsystems to benefit future wind reliability and availability activities, this report is based on a survey of various reliability databases that are accessible directly or indirectly by NREL. For each database, whenever feasible, a brief description summarizing database population, life span, and data collected is given along with its features and status. Selected results deemed beneficial to the industry and generated from each database are then highlighted. The report concludes with several observations obtained throughout the survey and several reliability data collection opportunities for the future.
Creation of the NaSCoRD Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denman, Matthew R.; Jankovsky, Zachary Kyle; Stuart, William
This report was written as part of a United States Department of Energy (DOE), Office of Nuclear Energy, Advanced Reactor Technologies program funded project to re-create the capabilities of the legacy Centralized Reliability Database Organization (CREDO) database. The CREDO database provided a record of component design and performance documentation across various systems that used sodium as a working fluid. Regaining this capability will allow the DOE complex and the domestic sodium reactor industry to better understand how previous systems were designed and built, for use in improving the design and operations of future loops. The contents of this report include: an overview of the current state of domestic sodium reliability databases; a summary of the ongoing effort to improve, understand, and process the CREDO information; a summary of the initial efforts to develop a unified sodium reliability database called the Sodium System Component Reliability Database (NaSCoRD); and an explanation of how potential users can access the domestic sodium reliability databases and of the type of information that can be accessed from them.
Some Reliability Issues in Very Large Databases.
ERIC Educational Resources Information Center
Lynch, Clifford A.
1988-01-01
Describes the unique reliability problems of very large databases that necessitate specialized techniques for hardware problem management. The discussion covers the use of controlled partial redundancy to improve reliability, issues in operating systems and database management systems design, and the impact of disk technology on very large…
Reliability database development for use with an object-oriented fault tree evaluation program
NASA Technical Reports Server (NTRS)
Heger, A. Sharif; Harrington, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann
1989-01-01
A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed, or are under development, to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.
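The object-oriented pairing of fault-tree nodes with database-supplied failure probabilities can be sketched in Python (class names and probabilities are illustrative; the original work used a different language and data model):

```python
class Event:
    """Basic event whose failure probability comes from a reliability database."""
    def __init__(self, name, probability):
        self.name = name
        self.probability = probability

    def failure_probability(self):
        return self.probability


class Gate(Event):
    """A gate combines the failure probabilities of its child events."""
    def __init__(self, name, children):
        self.name = name
        self.children = children


class AndGate(Gate):
    def failure_probability(self):
        # Every child must fail (independence assumed).
        p = 1.0
        for child in self.children:
            p *= child.failure_probability()
        return p


class OrGate(Gate):
    def failure_probability(self):
        # At least one child fails (independence assumed).
        p = 1.0
        for child in self.children:
            p *= 1.0 - child.failure_probability()
        return 1.0 - p


# Top event: the pump fails if power is lost OR both redundant valves fail.
tree = OrGate("pump_failure", [
    Event("power_loss", 0.01),
    AndGate("valves_blocked", [Event("valve_a", 0.05),
                               Event("valve_b", 0.05)]),
])
print(tree.failure_probability())  # ≈ 0.012475
```

Because each node answers the same `failure_probability()` query, swapping a database-backed basic event for a subtree requires no change to the evaluation code, which is the main appeal of the object-oriented design.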
NASA Astrophysics Data System (ADS)
Bonnet, José; Fradet, Thibault; Traversa, Paola; Tuleau-Malot, Christine; Reynaud-Bouret, Patricia; Laloe, Thomas; Manchuel, Kevin
2014-05-01
In metropolitan France the deformation rates are slow, implying low to moderate seismic activity. Therefore, earthquakes observed during the instrumental period (since 1962), and associated catalogs, cannot be representative of the seismic cycle for the French metropolitan territory. In such a context it is necessary, when performing seismic hazard studies, to consider historical seismic data in order to extend the observation period and to be more representative of the seismogenic behavior of geological structures. The French macroseismic database SisFrance is jointly developed by EDF (Electricité de France), BRGM (Bureau de Recherche Géologique et Minière) and IRSN (Institut de Radioprotection et Sureté Nucléaire). It contains more than 6,000 events inventoried between 217 BC and 2007 and more than 100,000 macroseismic observations. SisFrance is the reference macroseismic database for metropolitan France. The aim of this study is to determine, over the whole catalog, the completeness periods for the different epicentral intensity (Iepc) classes ≥ IV. Two methods have been used: 1) the method of Albarello et al. [2001], which has been adapted to best suit the French catalog, and 2) a mathematical method based on change-point estimation, proposed by Muggeo et al. [2003], which has been adapted to the analysis of seismic datasets. After a brief theoretical description, both methods are tested and validated using synthetic catalogs, before being applied to the French catalog. The results show that completeness periods estimated using these two methods are coherent with each other for events with Iepc ≥ IV (1876 using the Albarello et al. [2001] method and 1872 using the Muggeo et al. [2003] method) and events with Iepc ≥ V (1852 using the Albarello et al. [2001] method and 1855 using the Muggeo et al. [2003] method).
Larger differences in estimated completeness period appear when considering events with Iepc ≥ VI (around 30 years difference) and events with Iepc ≥ VII (around 50 years difference). These could be explained (1) by the differences in the way each method approaches the data; the Muggeo et al. [2003] method estimates all change points within the data series, whereas the Albarello et al. [2001] method focuses on the last one, and (2) by a more limited number of data for these epicentral intensity classes (2056 events with Iepc ≥ IV and 1252 events with Iepc ≥ V vs. 486 events with Iepc ≥ VI and 199 events with Iepc ≥ VII). Results obtained for epicentral intensity classes greater than VIII are considered not reliable due to the small number of existing data (around 30 events). The completeness periods determined in this study are discussed in the light of their contemporary historical context, and in particular of the evolution of the information available from historical archives since the 17th century.
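Both completeness methods look for the date at which the reporting rate of a given intensity class stabilizes. A deliberately crude single change-point estimate in Python (not the actual Albarello et al. or Muggeo et al. procedures) illustrates the idea on a synthetic annual-count series:

```python
def change_point(counts):
    """Return the index splitting `counts` into two segments with minimal
    within-segment sum of squared deviations from the segment means.
    The split marks where the reporting rate changes (a crude
    completeness break)."""
    best_i, best_cost = None, float("inf")
    for i in range(1, len(counts)):
        left, right = counts[:i], counts[i:]
        cost = sum((x - sum(left) / len(left)) ** 2 for x in left) \
             + sum((x - sum(right) / len(right)) ** 2 for x in right)
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

# Synthetic catalog: under-reported before index 10, complete afterwards.
counts = [1, 0, 2, 1, 0, 1, 2, 0, 1, 1, 8, 9, 7, 10, 8, 9, 11, 8, 10, 9]
print(change_point(counts))  # 10
```

A real completeness analysis would work with Poisson likelihoods and multiple change points, as the cited methods do; the sketch only shows the core idea of segmenting the count series.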
Space flight risk data collection and analysis project: Risk and reliability database
NASA Technical Reports Server (NTRS)
1994-01-01
The focus of the NASA 'Space Flight Risk Data Collection and Analysis' project was to acquire and evaluate space flight data with the express purpose of establishing a database containing measurements of specific risk assessment, reliability, availability, maintainability, and supportability (RRAMS) parameters. The comprehensive RRAMS database developed will support the performance of future NASA and aerospace industry risk and reliability studies. One of the primary goals has been to acquire unprocessed information relating to the reliability and availability of launch vehicles, and of their subsystems and components, from the 45th Space Wing (formerly the Eastern Space and Missile Center, ESMC) at Patrick Air Force Base. After evaluation and analysis, this information was encoded in terms of parameters pertinent to ascertaining reliability and availability statistics, and then assembled into an appropriate database structure.
Wind turbine reliability database update.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.
2009-03-01
This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, Dave; Brunett, Acacia J.; Bucknor, Matthew
GE Hitachi Nuclear Energy (GEH) and Argonne National Laboratory are currently engaged in a joint effort to modernize and develop probabilistic risk assessment (PRA) techniques for advanced non-light water reactors. At a high level, the primary outcome of this project will be the development of next-generation PRA methodologies that will enable risk-informed prioritization of safety- and reliability-focused research and development, while also identifying gaps that may be resolved through additional research. A subset of this effort is the development of a reliability database (RDB) methodology to determine applicable reliability data for inclusion in the quantification of the PRA. The RDB method developed during this project seeks to satisfy the requirements of the Data Analysis element of the ASME/ANS Non-LWR PRA standard. The RDB methodology utilizes a relevancy test to examine reliability data and determine whether it is appropriate to include as part of the reliability database for the PRA. The relevancy test compares three component properties to establish the level of similarity to components examined as part of the PRA: the component function, the component failure modes, and the environment/boundary conditions of the component. The relevancy test is used to gauge the quality of data found in a variety of sources, such as advanced reactor-specific databases, non-advanced reactor nuclear databases, and non-nuclear databases. The RDB also establishes the integration of expert judgment or separate reliability analysis with past reliability data. This paper provides details on the RDB methodology and includes an example application for determining the reliability of the intermediate heat exchanger of a sodium fast reactor. The example explores a variety of reliability data sources and assesses their applicability to the PRA of interest through the use of the relevancy test.
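A minimal sketch of such a three-property relevancy test in Python (the property encodings, example values, and the 0-3 score are illustrative assumptions, not the actual RDB implementation):

```python
def relevancy(candidate, target):
    """Count how many of the three component properties of a data-source
    record match the PRA component of interest (0 = irrelevant, 3 = fully
    relevant)."""
    checks = ("function", "failure_modes", "environment")
    return sum(candidate[k] == target[k] for k in checks)

# PRA component of interest: a sodium fast reactor intermediate heat exchanger.
target = {"function": "heat_exchange",
          "failure_modes": "tube_leak",
          "environment": "sodium"}

sources = {
    "advanced_reactor_db": {"function": "heat_exchange",
                            "failure_modes": "tube_leak",
                            "environment": "sodium"},
    "lwr_db":              {"function": "heat_exchange",
                            "failure_modes": "tube_leak",
                            "environment": "water"},
    "industrial_db":       {"function": "heat_exchange",
                            "failure_modes": "fouling",
                            "environment": "water"},
}

for name, component in sources.items():
    print(name, relevancy(component, target))
# advanced_reactor_db 3, lwr_db 2, industrial_db 1
```

In practice the test would weigh the properties rather than count matches, and the resulting score would feed the decision of whether to pool the data, adjust it, or defer to expert judgment.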
Li, Honglan; Joh, Yoon Sung; Kim, Hyunwoo; Paek, Eunok; Lee, Sang-Won; Hwang, Kyu-Baek
2016-12-22
Proteogenomics is a promising approach for various tasks ranging from gene annotation to cancer research. Databases for proteogenomic searches are often constructed by adding peptide sequences inferred from genomic or transcriptomic evidence to reference protein sequences. Such database inflation has the potential to identify novel peptides. However, it also raises concerns about sensitive and reliable peptide identification. Spurious peptides included in target databases may result in an underestimated false discovery rate (FDR). On the other hand, inflation of decoy databases could decrease the sensitivity of peptide identification due to the increased number of high-scoring random hits. Although several studies have addressed these issues, widely applicable guidelines for sensitive and reliable proteogenomic searches have hardly been available. To systematically evaluate the effect of database inflation in proteogenomic searches, we constructed a variety of real and simulated proteogenomic databases for yeast and human tandem mass spectrometry (MS/MS) data, respectively. Against these databases, we tested two popular database search tools with various approaches to search result validation: the target-decoy search strategy (with and without a refined scoring metric) and a mixture model-based method. The effect of separate filtering of known and novel peptides was also examined. The results from real and simulated proteogenomic searches confirmed that separate filtering increases the sensitivity and reliability of proteogenomic searches. However, no single method consistently identified the largest (or the smallest) number of novel peptides from real proteogenomic searches. We propose using a set of search result validation methods with separate filtering for sensitive and reliable identification of peptides in proteogenomic searches.
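The effect of separate filtering can be illustrated with a toy target-decoy calculation in Python (the scores and the 1% FDR walk are invented for illustration; real pipelines use far larger hit lists and refined score functions):

```python
def fdr_threshold(hits, alpha=0.01):
    """hits: list of (score, is_decoy). Walk down the score-sorted list
    and return the score of the last hit at which the decoy-estimated
    FDR (decoys / targets) is still at or below alpha."""
    targets = decoys = 0
    threshold = None
    for score, is_decoy in sorted(hits, key=lambda h: -h[0]):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if targets and decoys / targets <= alpha:
            threshold = score
    return threshold

# Separate filtering: split identifications into known and novel classes
# before thresholding, so the many high-scoring known peptides cannot
# mask an elevated error rate among the novel ones.
known = [(9.1, False), (8.7, False), (8.2, False), (3.0, True)]
novel = [(9.5, False), (4.0, True), (3.9, False)]
print(fdr_threshold(known))  # 8.2
print(fdr_threshold(novel))  # 9.5 -- stricter than the known class
```

The novel class ends up with a stricter score cutoff than the known class, which is exactly the behavior separate filtering is meant to enforce.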
ERIC Educational Resources Information Center
Fitzgibbons, Megan; Meert, Deborah
2010-01-01
The use of bibliographic management software and its internal search interfaces is now pervasive among researchers. This study compares the results between searches conducted in academic databases' search interfaces versus the EndNote search interface. The results show mixed search reliability, depending on the database and type of search…
Reliability of the Defense Commissary Agency Personnel Property Database.
2000-02-18
Departments’ personal property databases. The tests were designed to validate the personal property databases. This report is the second in a series of...with the completeness of its data, and key data elements were not reliable for estimating the historical costs of real property for the Military...values of greater than $100,000. However, some of the Military Departments had problems with the completeness of their data, and key data elements
López, Yosvany; Nakai, Kenta; Patil, Ashwini
2015-01-01
HitPredict is a consolidated resource of experimentally identified, physical protein-protein interactions with confidence scores to indicate their reliability. The study of genes and their inter-relationships using methods such as network and pathway analysis requires high-quality protein-protein interaction information. Extracting reliable interactions from most of the existing databases is challenging because they either contain only a subset of the available interactions, or a mixture of physical, genetic and predicted interactions. Automated integration of interactions is further complicated by varying levels of accuracy of database content and lack of adherence to standard formats. To address these issues, the latest version of HitPredict provides a manually curated dataset of 398,696 physical associations between 70,808 proteins from 105 species. Manual confirmation was used to resolve all issues encountered during data integration. For improved reliability assessment, this version combines a new score derived from the experimental information of the interactions with the original score based on the features of the interacting proteins. The combined interaction score performs better than either of the individual scores in HitPredict, as well as the reliability score of another similar database. HitPredict provides a web interface to search proteins and visualize their interactions, and the data can be downloaded for offline analysis. Data usability has been enhanced by mapping protein identifiers across multiple reference databases. Thus, the latest version of HitPredict provides a significantly larger, more reliable and usable dataset of protein-protein interactions from several species for the study of gene groups. Database URL: http://hintdb.hgc.jp/htp.
Lauricella, Leticia L; Costa, Priscila B; Salati, Michele; Pego-Fernandes, Paulo M; Terra, Ricardo M
2018-06-01
Database quality measurement should be considered a mandatory step to ensure an adequate level of confidence in data used for research and quality improvement. Several metrics have been described in the literature, but no standardized approach has been established. We aimed to describe a methodological approach applied to measure the quality and inter-rater reliability of a regional multicentric thoracic surgical database (Paulista Lung Cancer Registry). Data from the first 3 years of the Paulista Lung Cancer Registry underwent an audit process with 3 metrics: completeness, consistency, and inter-rater reliability. The first 2 methods were applied to the whole data set, and the last was calculated using 100 cases randomized for direct auditing. Inter-rater reliability was evaluated using the percentage of agreement between the data collector and auditor and through calculation of Cohen's κ and the intraclass correlation. The overall completeness per section ranged from 0.88 to 1.00, and the overall consistency was 0.96. Inter-rater reliability showed many variables with high disagreement (>10%). For numerical variables, the intraclass correlation was a better metric than percentage agreement. Cohen's κ showed that most variables had moderate to substantial agreement. The methodological approach applied to the Paulista Lung Cancer Registry showed that completeness and consistency metrics did not sufficiently reflect the real quality status of a database. Inter-rater reliability assessed with κ and the intraclass correlation was a better quality metric than completeness and consistency because it could determine the reliability of the specific variables used in research or benchmark reports. This report can serve as a paradigm for future studies of data quality measurement.
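The completeness and agreement metrics named above are straightforward to compute. A generic Python sketch (not the registry's actual audit code, with invented example records and ratings):

```python
def completeness(records, field):
    """Fraction of records with a non-missing value for `field`."""
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings:
    observed agreement corrected for chance agreement."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

records = [{"stage": "II"}, {"stage": ""}, {"stage": "III"}, {"stage": "I"}]
print(completeness(records, "stage"))  # 0.75

collector = ["I", "II", "II", "III", "I", "II"]
auditor   = ["I", "II", "III", "III", "I", "II"]
print(round(cohens_kappa(collector, auditor), 3))  # 0.75
```

Here the raw agreement is 5/6 ≈ 0.83, but kappa discounts the agreement expected by chance, which is why it is the preferred inter-rater metric in audits like the one described.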
Renard, Bernhard Y.; Xu, Buote; Kirchner, Marc; Zickmann, Franziska; Winter, Dominic; Korten, Simone; Brattig, Norbert W.; Tzur, Amit; Hamprecht, Fred A.; Steen, Hanno
2012-01-01
Currently, the reliable identification of peptides and proteins is only feasible when thoroughly annotated sequence databases are available. Although sequencing capacities continue to grow, many organisms remain without reliable, fully annotated reference genomes required for proteomic analyses. Standard database search algorithms fail to identify peptides that are not exactly contained in a protein database. De novo searches are generally hindered by their restricted reliability, and current error-tolerant search strategies are limited by global, heuristic tradeoffs between database and spectral information. We propose a Bayesian information criterion-driven error-tolerant peptide search (BICEPS) and offer an open source implementation based on this statistical criterion to automatically balance the information of each single spectrum and the database, while limiting the run time. We show that BICEPS performs as well as current database search algorithms when such algorithms are applied to sequenced organisms, whereas BICEPS only uses a remotely related organism database. For instance, we use a chicken instead of a human database corresponding to an evolutionary distance of more than 300 million years (International Chicken Genome Sequencing Consortium (2004) Sequence and comparative analysis of the chicken genome provide unique perspectives on vertebrate evolution. Nature 432, 695–716). We demonstrate the successful application to cross-species proteomics with a 33% increase in the number of identified proteins for a filarial nematode sample of Litomosoides sigmodontis. PMID:22493179
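As a reminder of the statistical criterion the method's name refers to (this is the generic BIC formula, not the BICEPS scoring function itself), a candidate explaining the spectrum slightly better but requiring extra parameters, such as additional sequence modifications, can still lose:

```python
import math

def bic(log_likelihood, n_params, n_observations):
    """Bayesian information criterion: lower is better. The first term
    penalizes model complexity; the second rewards goodness of fit."""
    return n_params * math.log(n_observations) - 2.0 * log_likelihood

# Hypothetical peptide candidates scored against the same spectrum:
simple  = bic(-120.0, n_params=1, n_observations=500)  # fewer modifications
complex_ = bic(-118.5, n_params=3, n_observations=500) # better fit, more params
print(simple < complex_)  # True: BIC prefers the simpler candidate
```

This trade-off is what lets an error-tolerant search expand the candidate space without being swamped by over-parameterized matches.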
Rhinoplasty perioperative database using a personal digital assistant.
Kotler, Howard S
2004-01-01
To construct a reliable, accurate, and easy-to-use handheld computer database that facilitates the point-of-care acquisition of perioperative text and image data specific to rhinoplasty. A user-modified database (Pendragon Forms [v.3.2]; Pendragon Software Corporation, Libertyville, Ill) and a graphic image program (Tealpaint [v.4.87]; Tealpaint Software, San Rafael, Calif) were used to capture text and image data, respectively, on a Palm OS (v.4.11) handheld operating with 8 megabytes of memory. The handheld and desktop databases were kept secure using PDASecure (v.2.0) and GoldSecure (v.3.0) (Trust Digital LLC, Fairfax, Va). The handheld data were then uploaded to a desktop database of either FileMaker Pro 5.0 (v.1) (FileMaker Inc, Santa Clara, Calif) or Microsoft Access 2000 (Microsoft Corp, Redmond, Wash). Patient data were collected from 15 patients undergoing rhinoplasty in a private practice outpatient ambulatory setting. Data integrity was assessed after 6 months' disk and hard drive storage. The handheld database was able to facilitate data collection and accurately record, transfer, and reliably maintain perioperative rhinoplasty data. Query capability allowed rapid searches using a multitude of keyword search terms specific to the operative maneuvers performed in rhinoplasty. Handheld computer technology provides a method of reliably recording and storing perioperative rhinoplasty information. The handheld computer facilitates the reliable and accurate storage and query of perioperative data, assisting the retrospective review of one's own results and the enhancement of surgical skills.
Mass and Reliability Source (MaRS) Database
NASA Technical Reports Server (NTRS)
Valdenegro, Wladimir
2017-01-01
The Mass and Reliability Source (MaRS) Database consolidates component mass and reliability data for all Orbital Replacement Units (ORUs) on the International Space Station (ISS) into a single database. It was created to help engineers develop a parametric model that relates hardware mass and reliability. MaRS supplies relevant failure data at the lowest possible component level while providing support for risk, reliability, and logistics analysis. Random-failure data is usually linked to the ORU assembly. MaRS uses this data to identify and display the lowest possible component failure level. As seen in Figure 1, the failure point is identified to the lowest level: Component 2.1. This is useful for efficient planning of spare supplies, supporting long-duration crewed missions, allowing quicker trade studies, and streamlining diagnostic processes. MaRS is composed of information from various databases: MADS (operating hours), VMDB (indentured part lists), and ISS PART (failure data). This information is organized in Microsoft Excel and accessed through a program made in Microsoft Access (Figure 2). The focus of the Fall 2017 internship was to identify the components that were the root cause of failure from the given random-failure data, develop a taxonomy for the database, and attach material headings to the component list. Secondary objectives included verifying the integrity of the data in MaRS, eliminating any part discrepancies, and generating documentation for future reference. Due to the nature of the random-failure data, data mining had to be done manually, without the assistance of an automated program, to ensure positive identification.
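The drill-down from an ORU-level failure record to the lowest identifiable component can be sketched as a walk over the indentured parts list (the structure and names below are illustrative, echoing the "Component 2.1" example above, not the actual MaRS schema):

```python
# Indentured parts list: each assembly maps to its child components.
parts = {
    "ORU": ["Component 1", "Component 2"],
    "Component 2": ["Component 2.1", "Component 2.2"],
}

# Components implicated by the failure data (normally linked at ORU level).
failures = {"Component 2.1"}

def lowest_failure_level(node):
    """Return the deepest descendants of `node` that appear in the
    failure data, i.e. the lowest identifiable failure level."""
    hits = []
    for child in parts.get(node, []):
        hits.extend(lowest_failure_level(child))
    if not hits and node in failures:
        hits.append(node)
    return hits

print(lowest_failure_level("ORU"))  # ['Component 2.1']
```

Resolving failures to the deepest matching node is what enables the spare-parts planning and trade studies the abstract mentions, since the ORU-level record alone would over-scope every failure.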
First year progress report on the development of the Texas flexible pavement database.
DOT National Transportation Integrated Search
2008-01-01
Comprehensive and reliable databases are essential for the development, validation, and calibration of any pavement : design and rehabilitation system. These databases should include material properties, pavement structural : characteristics, highway...
NASA Technical Reports Server (NTRS)
Singh, M.
1999-01-01
Ceramic matrix composite (CMC) components are being designed, fabricated, and tested for a number of high temperature, high performance applications in aerospace and ground based systems. The critical need for and the role of reliable and robust databases for the design and manufacturing of ceramic matrix composites are presented. A number of issues related to engineering design, manufacturing technologies, joining, and attachment technologies are also discussed. Examples are given of various ongoing activities in the areas of composite databases, designing to codes and standards, and design for manufacturing.
Keith B. Aubry; Catherine M. Raley; Kevin S. McKelvey
2017-01-01
The availability of spatially referenced environmental data and species occurrence records in online databases enables practitioners to easily generate species distribution models (SDMs) for a broad array of taxa. Such databases often include occurrence records of unknown reliability, yet little information is available on the influence of data quality on SDMs generated...
Corbellini, Carlo; Andreoni, Bruno; Ansaloni, Luca; Sgroi, Giovanni; Martinotti, Mario; Scandroglio, Ildo; Carzaniga, Pierluigi; Longoni, Mauro; Foschi, Diego; Dionigi, Paolo; Morandi, Eugenio; Agnello, Mauro
2018-01-01
Measurement and monitoring of the quality of care using a core set of quality measures are increasing in health service research. Although administrative databases include limited clinical data, they offer an attractive source for quality measurement. The purpose of this study, therefore, was to evaluate the completeness of different administrative data sources compared to a clinical survey in evaluating rectal cancer cases. Between May 2012 and November 2014, a clinical survey was done on 498 Lombardy patients who had rectal cancer and underwent surgical resection. These collected data were compared with the information extracted from administrative sources including the Hospital Discharge Dataset, drug database, daycare activity data, fee-exemption database, and regional screening program database. The agreement evaluation was performed using a set of 12 quality indicators. Patient complexity was a difficult indicator to measure for lack of clinical data. Preoperative staging was another suboptimal indicator due to the frequent missing administrative registration of tests performed. The agreement between the 2 data sources regarding chemoradiotherapy treatments was high. Screening detection, minimally invasive techniques, length of stay, and unpreventable readmissions emerged as reliable quality indicators. Postoperative morbidity could be a useful indicator, but its agreement was lower, as expected. Healthcare administrative databases are large, real-time repositories of data useful in measuring quality in a healthcare system. Our investigation reveals that the reliability of indicators varies among them. Ideally, a combination of data from both sources could be used to improve the usefulness of less reliable indicators.
Dynamic modeling of the cesium, strontium, and ruthenium transfer to grass and vegetables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renaud, P.; Real, J.; Maubert, H.
1999-05-01
From 1988 to 1993, the Nuclear Safety and Protection Institute (Institut de Protection et de Surete Nucleaire -- IPSN) conducted experimental programs focused on transfers to vegetation following accidental localized deposits of radioactive aerosols. In relation to vegetable crops (fruit, leaf, and root vegetables) and meadow grass, these experiments have enabled a determination of the factors involved in the transfer of cesium, strontium, and ruthenium at successive harvests, or cuttings, for various time lags after contamination. The dynamic modeling given by these results allows an evaluation of changes in the mass activity of vegetables and grass during the months following deposit. It constitutes part of the ASTRAL post-accident radioecology model.
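In their simplest form, dynamic transfer models of this kind reduce to an effective-half-life decline of mass activity after deposit, combining radioactive decay, growth dilution, and weathering. A minimal Python sketch (illustrative only; the effective half-life value is invented and the ASTRAL model itself is more detailed):

```python
import math

def mass_activity(a0, t_days, t_eff_days):
    """Mass activity (e.g. Bq/kg) t_days after deposit, assuming a single
    effective half-life t_eff_days lumps together decay, growth dilution,
    and weathering losses."""
    return a0 * math.exp(-math.log(2) * t_days / t_eff_days)

# With a hypothetical 14-day effective half-life, activity falls to a
# quarter of its initial value after two half-lives (28 days):
print(round(mass_activity(1000.0, 28.0, 14.0), 1))  # 250.0
```

Successive-harvest data like those described above are what pin down the effective half-life per nuclide and crop type.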
Development and application of a basis database for materials life cycle assessment in China
NASA Astrophysics Data System (ADS)
Li, Xiaoqing; Gong, Xianzheng; Liu, Yu
2017-03-01
Because materials life cycle assessment (MLCA) is a data-intensive method, high-quality environmental burden data are an essential premise for carrying it out, and the reliability of the data directly influences the reliability of the assessment results and their application performance. Building a Chinese MLCA database therefore provides the basic data and technical support needed for carrying out and improving LCA practice. Firstly, recent progress on databases related to materials life cycle assessment research and development is introduced. Secondly, following the requirements of the ISO 14040 series of standards, the database framework and the main datasets of the materials life cycle assessment are studied. Thirdly, an MLCA data platform based on big data is developed. Finally, future research work is proposed and discussed.
Monitoring outcomes with relational databases: does it improve quality of care?
Clemmer, Terry P
2004-12-01
There are 3 key ingredients in improving the quality of medical care: 1) using a scientific process of improvement, 2) executing the process at the lowest possible level in the organization, and 3) measuring the results of any change reliably. Relational databases, when used within these guidelines, are of great value in these efforts if they contain reliable information that is pertinent to the project and is used in a scientific process of quality improvement by a front-line team. Unfortunately, the data are frequently unreliable and/or not pertinent to the local process, and are used by persons at very high levels in the organization without a scientific process and without reliable measurement of the outcome. Under these circumstances the effectiveness of relational databases in improving care is marginal at best, frequently wasteful, and potentially harmful. This article explores examples of these concepts.
GEOMAGIA50: An archeointensity database with PHP and MySQL
NASA Astrophysics Data System (ADS)
Korhonen, K.; Donadini, F.; Riisager, P.; Pesonen, L. J.
2008-04-01
The GEOMAGIA50 database stores 3798 archeomagnetic and paleomagnetic intensity determinations dated to the past 50,000 years. It also stores details of the measurement setup for each determination, which are used for ranking the data according to prescribed reliability criteria. The ranking system aims to alleviate the data reliability problem inherent in this kind of data. GEOMAGIA50 is based on two popular open source technologies: the MySQL database management system is used for storing the data, whereas the functionality and user interface are provided by server-side PHP scripts. This brief gives a detailed description of GEOMAGIA50 from a technical viewpoint.
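Ranking determinations against prescribed reliability criteria can be sketched in Python (the criteria and field names below are invented placeholders, not the actual GEOMAGIA50 criteria):

```python
def rank(determination):
    """Count how many reliability criteria a record satisfies (0-3).
    Higher ranks indicate measurement setups that better guard against
    known sources of error."""
    criteria = [
        determination.get("n_samples", 0) >= 3,          # enough replicates
        determination.get("alteration_check", False),    # alteration monitored
        determination.get("anisotropy_correction", False),  # anisotropy corrected
    ]
    return sum(criteria)

record = {"n_samples": 5,
          "alteration_check": True,
          "anisotropy_correction": False}
print(rank(record))  # 2
```

Storing the measurement-setup details alongside each intensity value, as GEOMAGIA50 does, is what makes this kind of after-the-fact ranking possible at query time.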
An Evaluation of Online Business Databases.
ERIC Educational Resources Information Center
van der Heyde, Angela J.
The purpose of this study was to evaluate the credibility and timeliness of online business databases. The areas of evaluation were the currency, reliability, and extent of financial information in the databases. These were measured by performing an online search for financial information on five U.S. companies. The method of selection for the…
Bridging the Gap between the Data Base and User in a Distributed Environment.
ERIC Educational Resources Information Center
Howard, Richard D.; And Others
1989-01-01
The distribution of databases physically separates users from the administrators who perform database administration. Drawing on the work of social scientists in reliability and validity, a set of concepts and a list of questions to ensure data quality were developed. (Author/MLW)
Network-based statistical comparison of citation topology of bibliographic databases
Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko
2014-01-01
Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions on their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or as a scientific evaluation guideline for governments and research agencies. PMID:25263231
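The bow-tie decomposition mentioned above partitions a directed citation network around its largest strongly connected component (SCC): IN holds nodes that can reach the core, OUT holds nodes reachable from it. A minimal plain-Python sketch of that partition (not the paper's exact procedure; the toy graph is invented):

```python
from collections import deque

def bowtie(nodes, edges):
    """Bow-tie decomposition sketch for a directed network: return the
    largest strongly connected component (core), the IN set (nodes that
    can reach the core), and the OUT set (nodes reachable from it)."""
    fwd = {u: set() for u in nodes}
    rev = {u: set() for u in nodes}
    for u, v in edges:
        fwd[u].add(v)
        rev[v].add(u)

    def reach(start, adj):
        # Breadth-first search from a set of start nodes.
        seen, q = set(start), deque(start)
        while q:
            u = q.popleft()
            for v in adj[u] - seen:
                seen.add(v)
                q.append(v)
        return seen

    # Largest SCC: for each node, intersect its forward- and
    # backward-reachable sets (O(n*(V+E)); fine for an illustration).
    core = set()
    for u in nodes:
        scc = reach({u}, fwd) & reach({u}, rev)
        if len(scc) > len(core):
            core = scc
    in_set = reach(core, rev) - core
    out_set = reach(core, fwd) - core
    return core, in_set, out_set

# Tiny example: 1->2->3->1 forms the core, 0 feeds it, 4 is fed by it.
core, ins, outs = bowtie([0, 1, 2, 3, 4],
                         [(0, 1), (1, 2), (2, 3), (3, 1), (3, 4)])
# core == {1, 2, 3}, ins == {0}, outs == {4}
```

Comparing the relative sizes of core, IN, and OUT across databases is one way such topological inconsistencies become visible.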
Expediting topology data gathering for the TOPDB database.
Dobson, László; Langó, Tamás; Reményi, István; Tusnády, Gábor E
2015-01-01
The Topology Data Bank of Transmembrane Proteins (TOPDB, http://topdb.enzim.ttk.mta.hu) contains experimentally determined topology data of transmembrane proteins. Recently, we have updated TOPDB from several sources and utilized a newly developed topology prediction algorithm to determine the most reliable topology using the results of experiments as constraints. In addition to collecting the experimentally determined topology data published in the last couple of years, we gathered topographies defined by the TMDET algorithm using 3D structures from the PDBTM. Results of global topology analysis of various organisms as well as topology data generated by high throughput techniques, like the sequential positions of N- or O-glycosylations were incorporated into the TOPDB database. Moreover, a new algorithm was developed to integrate scattered topology data from various publicly available databases and a new method was introduced to measure the reliability of predicted topologies. We show that reliability values highly correlate with the per protein topology accuracy of the utilized prediction method. Altogether, more than 52,000 new topology data and more than 2600 new transmembrane proteins have been collected since the last public release of the TOPDB database. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
VerSeDa: vertebrate secretome database
Cortazar, Ana R.; Oguiza, José A.
2017-01-01
Based on the current tools, de novo secretome (full set of proteins secreted by an organism) prediction is a time consuming bioinformatic task that requires a multifactorial analysis in order to obtain reliable in silico predictions. Hence, to accelerate this process and offer researchers a reliable repository where secretome information can be obtained for vertebrates and model organisms, we have developed VerSeDa (Vertebrate Secretome Database). This freely available database stores information about proteins that are predicted to be secreted through the classical and non-classical mechanisms, for the wide range of vertebrate species deposited at the NCBI, UCSC and ENSEMBL sites. To our knowledge, VerSeDa is the only state-of-the-art database designed to store secretome data from multiple vertebrate genomes, thus, saving an important amount of time spent in the prediction of protein features that can be retrieved from this repository directly. Database URL: VerSeDa is freely available at http://genomics.cicbiogune.es/VerSeDa/index.php PMID:28365718
Drozdovitch, Vladimir; Zhukova, Olga; Germenchuk, Maria; Khrutchinsky, Arkady; Kukhta, Tatiana; Luckyanov, Nickolas; Minenko, Victor; Podgaiskaya, Marina; Savkin, Mikhail; Vakulovsky, Sergey; Voillequé, Paul; Bouville, André
2012-01-01
Results of all available meteorological and radiation measurements that were performed in Belarus during the first three months after the Chernobyl accident were collected from various sources and incorporated into a single database. Meteorological information such as precipitation, wind speed and direction, and temperature in localities were obtained from meteorological station facilities. Radiation measurements include gamma-exposure rate in air, daily fallout, concentration of different radionuclides in soil, grass, cow’s milk and water as well as total beta-activity in cow’s milk. Considerable efforts were made to evaluate the reliability of the measurements that were collected. The electronic database can be searched according to type of measurement, date, and location. The main purpose of the database is to provide reliable data that can be used in the reconstruction of thyroid doses resulting from the Chernobyl accident. PMID:23103580
Patterson, P Daniel; Weaver, Matthew D; Fabio, Anthony; Teasley, Ellen M; Renn, Megan L; Curtis, Brett R; Matthews, Margaret E; Kroemer, Andrew J; Xun, Xiaoshuang; Bizhanova, Zhadyra; Weiss, Patricia M; Sequeira, Denisse J; Coppler, Patrick J; Lang, Eddy S; Higgins, J Stephen
2018-02-15
This study sought to systematically search the literature to identify reliable and valid survey instruments for fatigue measurement in the Emergency Medical Services (EMS) occupational setting. A systematic review design was used, searching six databases and one website. The research question guiding the search was developed a priori and registered with the PROSPERO database of systematic reviews: "Are there reliable and valid instruments for measuring fatigue among EMS personnel?" (2016:CRD42016040097). The primary outcome of interest was criterion-related validity. Important outcomes of interest included reliability (e.g., internal consistency) and indicators of sensitivity and specificity. Members of the research team independently screened records from the databases. Full-text articles were evaluated by adapting the Bolster and Rourke system for categorizing findings of systematic reviews, and the data abstracted from the body of literature were rated as favorable, unfavorable, mixed/inconclusive, or no impact. The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) methodology was used to evaluate the quality of evidence. The search strategy yielded 1,257 unique records. Thirty-four unique experimental and non-experimental studies were determined relevant following full-text review. Nineteen studies reported on the reliability and/or validity of ten different fatigue survey instruments. Eighteen different studies evaluated the reliability and/or validity of four different sleepiness survey instruments. None of the retained studies reported sensitivity or specificity. Evidence quality was rated as very low across all outcomes. In this systematic review, limited evidence of the reliability and validity of 14 different survey instruments to assess the fatigue and/or sleepiness status of EMS personnel and related shift worker groups was identified.
Experiences with Two Reliability Data Collection Efforts (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, S.; Lantz, E.
2013-08-01
This presentation, given by NREL at the Wind Reliability Experts Meeting in Albuquerque, New Mexico, outlines the causes of wind plant operational expenditures and gearbox failures and describes NREL's efforts to create a gearbox failure database.
NASA Astrophysics Data System (ADS)
García-Mayordomo, Julián; Martín-Banda, Raquel; Insua-Arévalo, Juan M.; Álvarez-Gómez, José A.; Martínez-Díaz, José J.; Cabral, João
2017-08-01
Active fault databases are a very powerful and useful tool in seismic hazard assessment, particularly when singular faults are considered seismogenic sources. Active fault databases are also a very relevant source of information for earth scientists, earthquake engineers and even teachers or journalists. Hence, active fault databases should be updated and thoroughly reviewed on a regular basis in order to maintain a standard quality and uniform criteria. Desirably, active fault databases should somehow indicate the quality of the geological data and, particularly, the reliability attributed to crucial fault-seismic parameters, such as maximum magnitude and recurrence interval. In this paper we explain how we tackled these issues during the process of updating and reviewing the Quaternary Active Fault Database of Iberia (QAFI) to its current version 3. We devote particular attention to describing the scheme devised for classifying the quality and representativeness of the geological evidence of Quaternary activity and the accuracy of the slip rate estimation in the database. Subsequently, we use this information as input for a straightforward rating of the level of reliability of maximum magnitude and recurrence interval fault seismic parameters. We conclude that QAFI v.3 is a much better database than version 2, either for proper use in seismic hazard applications or as an informative source for non-specialized users. However, we already envision new improvements for a future update.
A Note on the Score Reliability for the Satisfaction with Life Scale: An RG Study
ERIC Educational Resources Information Center
Vassar, Matt
2008-01-01
The purpose of the present study was to meta-analytically investigate the score reliability for the Satisfaction With Life Scale. Four-hundred and sixteen articles using the measure were located through electronic database searches and then separated to identify studies which had calculated reliability estimates from their own data. Sixty-two…
High-Performance, Reliable Multicasting: Foundations for Future Internet Groupware Applications
NASA Technical Reports Server (NTRS)
Callahan, John; Montgomery, Todd; Whetten, Brian
1997-01-01
Network protocols that provide efficient, reliable, and totally-ordered message delivery to large numbers of users will be needed to support many future Internet applications. The Reliable Multicast Protocol (RMP) is implemented on top of IP multicast to facilitate reliable transfer of data for replicated databases and groupware applications that will emerge on the Internet over the next decade. This paper explores some of the basic questions and applications of reliable multicasting in the context of the development and analysis of RMP.
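Totally ordered delivery of the kind RMP provides can be illustrated with a much simpler scheme: a single sequencer stamps every message with a global sequence number, and each receiver delivers strictly in stamp order even when the network reorders packets. RMP itself uses a rotating token site rather than a fixed sequencer; the sketch below only demonstrates the ordering guarantee:

```python
class Sequencer:
    """Stamps each multicast message with a global sequence number.
    A single-sequencer design, used here purely to illustrate total
    ordering; RMP's rotating-token design is more elaborate."""
    def __init__(self):
        self.next_seq = 0

    def stamp(self, msg):
        seq = self.next_seq
        self.next_seq += 1
        return (seq, msg)

class Receiver:
    """Buffers out-of-order arrivals and delivers the in-order prefix."""
    def __init__(self):
        self.buffer = {}
        self.expected = 0
        self.delivered = []

    def receive(self, seq, msg):
        self.buffer[seq] = msg                 # may arrive out of order
        while self.expected in self.buffer:    # deliver contiguous prefix
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1

seqr = Sequencer()
stamped = [seqr.stamp(m) for m in ["a", "b", "c"]]
r = Receiver()
for seq, msg in [stamped[2], stamped[0], stamped[1]]:  # network reorders
    r.receive(seq, msg)
print(r.delivered)  # -> ['a', 'b', 'c']
```

Every receiver running this logic delivers the same messages in the same order, which is the property replicated databases built on reliable multicast depend on.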
How to: identify non-tuberculous Mycobacterium species using MALDI-TOF mass spectrometry.
Alcaide, F; Amlerová, J; Bou, G; Ceyssens, P J; Coll, P; Corcoran, D; Fangous, M-S; González-Álvarez, I; Gorton, R; Greub, G; Hery-Arnaud, G; Hrábak, J; Ingebretsen, A; Lucey, B; Marekoviċ, I; Mediavilla-Gradolph, C; Monté, M R; O'Connor, J; O'Mahony, J; Opota, O; O'Reilly, B; Orth-Höller, D; Oviaño, M; Palacios, J J; Palop, B; Pranada, A B; Quiroga, L; Rodríguez-Temporal, D; Ruiz-Serrano, M J; Tudó, G; Van den Bossche, A; van Ingen, J; Rodriguez-Sanchez, B
2018-06-01
The implementation of MALDI-TOF MS for microorganism identification has changed the routine of the microbiology laboratories as we knew it. Most microorganisms can now be reliably identified within minutes using this inexpensive, user-friendly methodology. However, its application in the identification of mycobacteria isolates has been hampered by the structure of their cell wall. Improvements in the sample processing method and in the available database have proved key factors for the rapid and reliable identification of non-tuberculous mycobacteria isolates using MALDI-TOF MS. The main objective is to provide information about the proceedings for the identification of non-tuberculous isolates using MALDI-TOF MS and to review different sample processing methods, available databases, and the interpretation of the results. Results from relevant studies on the use of the available MALDI-TOF MS instruments, the implementation of innovative sample processing methods, or the implementation of improved databases are discussed. Insight about the methodology required for reliable identification of non-tuberculous mycobacteria and its implementation in the microbiology laboratory routine is provided. Microbiology laboratories where MALDI-TOF MS is available can benefit from its capacity to identify most clinically interesting non-tuberculous mycobacteria in a rapid, reliable, and inexpensive manner. Copyright © 2017 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Killion, Patrick J; Sherlock, Gavin; Iyer, Vishwanath R
2003-01-01
Background: The power of microarray analysis can be realized only if data are systematically archived and linked to biological annotations as well as analysis algorithms. Description: The Longhorn Array Database (LAD) is a MIAME-compliant microarray database that operates on PostgreSQL and Linux. It is a fully open source version of the Stanford Microarray Database (SMD), one of the largest microarray databases. LAD is freely available. Conclusions: Our development of LAD provides a simple, free, open, reliable and proven solution for storage and analysis of two-color microarray data. PMID:12930545
The HISTMAG database: combining historical, archaeomagnetic and volcanic data
NASA Astrophysics Data System (ADS)
Arneitz, Patrick; Leonhardt, Roman; Schnepp, Elisabeth; Heilig, Balázs; Mayrhofer, Franziska; Kovacs, Peter; Hejda, Pavel; Valach, Fridrich; Vadasz, Gergely; Hammerl, Christa; Egli, Ramon; Fabian, Karl; Kompein, Niko
2017-09-01
Records of the past geomagnetic field can be divided into two main categories. These are instrumental historical observations on the one hand, and field estimates based on the magnetization acquired by rocks, sediments and archaeological artefacts on the other hand. In this paper, a new database combining historical, archaeomagnetic and volcanic records is presented. HISTMAG is a relational database, implemented in MySQL, and can be accessed via a web-based interface (http://www.conrad-observatory.at/zamg/index.php/data-en/histmag-database). It combines available global historical data compilations covering the last ∼500 yr as well as archaeomagnetic and volcanic data collections from the last 50 000 yr. Furthermore, new historical and archaeomagnetic records, mainly from central Europe, have been acquired. In total, 190 427 records are currently available in the HISTMAG database, of which the majority are historical declination measurements (155 525). The original database structure was complemented by new fields, which allow for a detailed description of the different data types. A user-comment function provides the possibility for a scientific discussion about individual records. Therefore, the HISTMAG database supports thorough reliability and uncertainty assessments of the widely different data sets, which are an essential basis for geomagnetic field reconstructions. A database analysis revealed a systematic offset for declination records derived from compass roses on historical geographical maps through comparison with other historical records, while maps created for mining activities represent a reliable source.
HASH: the Hong Kong/AAO/Strasbourg Hα planetary nebula database
NASA Astrophysics Data System (ADS)
Parker, Quentin A.; Bojičić, Ivan S.; Frew, David J.
2016-07-01
By incorporating our major recent discoveries with re-measured and verified contents of existing catalogues we provide, for the first time, an accessible, reliable, on-line SQL database of essential, up-to-date information for all known Galactic planetary nebulae (PNe). We have attempted to: i) reliably remove PN mimics/false IDs that have biased previous studies and ii) provide accurate positions, sizes, morphologies, multi-wavelength imagery and spectroscopy. We also provide a link to CDS/Vizier for the archival history of each object and other valuable links to external data. With the HASH interface, users can sift, select, browse, collate, investigate, download and visualise the entire currently known Galactic PNe diversity. HASH provides the community with the most complete and reliable data with which to undertake new science.
Patel, Vanash M.; Ashrafian, Hutan; Almoudaris, Alex; Makanjuola, Jonathan; Bucciarelli-Ducci, Chiara; Darzi, Ara; Athanasiou, Thanos
2013-01-01
Objectives To compare H index scores for healthcare researchers returned by the Google Scholar, Web of Science and Scopus databases, and to assess whether a researcher's age, country of institutional affiliation and physician status influences calculations. Subjects and Methods One hundred and ninety-five Nobel laureates in Physiology and Medicine from 1901 to 2009 were considered. Year of first and last publications, total publications and citation counts, and the H index for each laureate were calculated from each database. The Cronbach's alpha statistic was used to measure the reliability of H index scores between the databases. Laureate characteristic influence on the H index was analysed using linear regression. Results There was no concordance between the databases when considering the number of publications and citations count per laureate. The H index was the most reliably calculated bibliometric across the three databases (Cronbach's alpha = 0.900). All databases returned significantly higher H index scores for younger laureates (p < 0.0001). Google Scholar and Web of Science returned significantly higher H index for physician laureates (p = 0.025 and p = 0.029, respectively). Country of institutional affiliation did not influence the H index in any database. Conclusion The H index appeared to be the most consistently calculated bibliometric between the databases for Nobel laureates in Physiology and Medicine. Researcher-specific characteristics constituted an important component of objective research assessment. The findings of this study call into question the choice of current and future academic performance databases. PMID:22964880
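Because the H index is computed directly from per-paper citation counts, differences in a database's publication and citation coverage feed straight into the score. A minimal sketch of Hirsch's definition (the largest h such that the researcher has at least h papers with at least h citations each):

```python
def h_index(citations):
    """Compute the H index from a list of per-paper citation counts:
    the largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:      # the i-th most-cited paper still has >= i citations
            h = i
        else:
            break
    return h

# A researcher with papers cited 10, 8, 5, 4, 3 times has h = 4:
# four papers with >= 4 citations each, but not five with >= 5.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([0, 0]))            # -> 0
```

Running the same function over the differing citation counts returned by Google Scholar, Web of Science, and Scopus illustrates why the three databases can disagree on a laureate's score.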
NASA Technical Reports Server (NTRS)
Aruljothi, Arunvenkatesh
2016-01-01
The Space Exploration Division of the Safety and Mission Assurance Directorate is responsible for reducing the risk to Human Space Flight Programs by providing system safety, reliability, and risk analysis. The Risk & Reliability Analysis branch plays a part in this by utilizing Probabilistic Risk Assessment (PRA) and Reliability and Maintainability (R&M) tools to identify possible types of failure and effective solutions. A continuous effort of this branch is MaRS, or Mass and Reliability System, a tool that was the focus of this internship. Future long duration space missions will have to find a balance between the mass and reliability of their spare parts. They will be unable to take spares of everything and will have to determine what is most likely to require maintenance and spares. Currently there is no database that combines mass and reliability data of low-level space-grade components. MaRS aims to be the first database to do this. The data in MaRS will be based on the hardware flown on the International Space Station (ISS). The components on the ISS have a long history and are well documented, making them the perfect source. Currently, MaRS is a functioning Excel workbook database; the backend is complete and only requires optimization. MaRS has been populated with all the assemblies and their components that are used on the ISS; the failures of these components are updated regularly. This project was a continuation of the efforts of previous intern groups. Once complete, R&M engineers working on future space flight missions will be able to quickly access failure and mass data on assemblies and components, allowing them to make important decisions and tradeoffs.
Frost, Rachael; Levati, Sara; McClurg, Doreen; Brady, Marian; Williams, Brian
2017-06-01
To systematically review methods for measuring adherence used in home-based rehabilitation trials and to evaluate their validity, reliability, and acceptability. In phase 1 we searched the CENTRAL database, NHS Economic Evaluation Database, and Health Technology Assessment Database (January 2000 to April 2013) to identify adherence measures used in randomized controlled trials of allied health professional home-based rehabilitation interventions. In phase 2 we searched the databases of MEDLINE, Embase, CINAHL, Allied and Complementary Medicine Database, PsycINFO, CENTRAL, ProQuest Nursing and Allied Health, and Web of Science (inception to April 2015) for measurement property assessments for each measure. Studies assessing the validity, reliability, or acceptability of adherence measures. Two reviewers independently extracted data on participant and measure characteristics, measurement properties evaluated, evaluation methods, and outcome statistics and assessed study quality using the COnsensus-based Standards for the selection of health Measurement INstruments checklist. In phase 1 we included 8 adherence measures (56 trials). In phase 2, from the 222 measurement property assessments identified in 109 studies, 22 high-quality measurement property assessments were narratively synthesized. Low-quality studies were used as supporting data. StepWatch Activity Monitor validly and acceptably measured short-term step count adherence. The Problematic Experiences of Therapy Scale validly and reliably assessed adherence to vestibular rehabilitation exercises. Adherence diaries had moderately high validity and acceptability across limited populations. The Borg 6 to 20 scale, Bassett and Prapavessis scale, and Yamax CW series had insufficient validity. Low-quality evidence supported use of the Joint Protection Behaviour Assessment. Polar A1 series heart monitors were considered acceptable by 1 study. Current rehabilitation adherence measures are limited. 
Some possess promising validity and acceptability for certain parameters of adherence, situations, and populations and should be used in these situations. Rigorous evaluation of adherence measures in a broader range of populations is needed. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Lindsley, Kristina; Li, Tianjing; Ssemanda, Elizabeth; Virgili, Gianni; Dickersin, Kay
2016-04-01
Are existing systematic reviews of interventions for age-related macular degeneration incorporated into clinical practice guidelines? High-quality systematic reviews should be used to underpin evidence-based clinical practice guidelines and clinical care. We examined the reliability of systematic reviews of interventions for age-related macular degeneration (AMD) and described the main findings of reliable reviews in relation to clinical practice guidelines. Eligible publications were systematic reviews of the effectiveness of treatment interventions for AMD. We searched a database of systematic reviews in eyes and vision without language or date restrictions; the database was up to date as of May 6, 2014. Two authors independently screened records for eligibility and abstracted and assessed the characteristics and methods of each review. We classified reviews as reliable when they reported eligibility criteria, comprehensive searches, methodologic quality of included studies, appropriate statistical methods for meta-analysis, and conclusions based on results. We mapped treatment recommendations from the American Academy of Ophthalmology (AAO) Preferred Practice Patterns (PPPs) for AMD to systematic reviews and citations of reliable systematic reviews to support each treatment recommendation. Of 1570 systematic reviews in our database, 47 met inclusion criteria; most targeted neovascular AMD and investigated anti-vascular endothelial growth factor (VEGF) interventions, dietary supplements, or photodynamic therapy. We classified 33 (70%) reviews as reliable. The quality of reporting varied, with criteria for reliable reporting met more often by Cochrane reviews and reviews whose authors disclosed conflicts of interest. Anti-VEGF agents and photodynamic therapy were the only interventions identified as effective by reliable reviews. 
Of 35 treatment recommendations extracted from the PPPs, 15 could have been supported with reliable systematic reviews; however, only 1 recommendation cited a reliable intervention systematic review. No reliable systematic review was identified for 20 treatment recommendations, highlighting areas of evidence gaps. For AMD, reliable systematic reviews exist for many treatment recommendations in the AAO PPPs and should be cited to support these recommendations. We also identified areas where no high-level evidence exists. Mapping clinical practice guidelines to existing systematic reviews is one way to highlight areas where evidence generation or evidence synthesis is either available or needed. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Mass and Reliability System (MaRS)
NASA Technical Reports Server (NTRS)
Barnes, Sarah
2016-01-01
The Safety and Mission Assurance (S&MA) Directorate is responsible for mitigating risk, providing system safety, and lowering risk for space programs from ground to space. The S&MA is divided into 4 divisions: the Space Exploration Division (NC), the International Space Station Division (NE), the Safety & Test Operations Division (NS), and the Quality and Flight Equipment Division (NT). The interns, myself and Arun Aruljothi, will be working with the Risk & Reliability Analysis Branch under the NC Division. The mission of this division is to identify, characterize, diminish, and communicate risk by implementing an efficient and effective assurance model. The team utilizes Reliability and Maintainability (R&M) and Probabilistic Risk Assessment (PRA) to ensure decisions concerning risks are informed, vehicles are safe and reliable, and program/project requirements are realistic and realized. This project pertains to the Orion mission, so it is geared toward long duration Human Space Flight Programs. For space missions, payload is a critical concept; balancing what hardware can be replaced by components versus by Orbital Replacement Units (ORU) or subassemblies is key. For this effort a database was created that combines mass and reliability data, called the Mass and Reliability System or MaRS. The U.S. International Space Station (ISS) components are used as reference parts in the MaRS database. Using ISS components as a platform is beneficial because of the historical context and the environment similarities to a space flight mission. MaRS uses a combination of systems: International Space Station PART for failure data, Vehicle Master Database (VMDB) for ORUs & components, Maintenance & Analysis Data Set (MADS) for operation hours and other pertinent data, and Hardware History Retrieval System (HHRS) for unit weights. MaRS is populated using a Visual Basic Application.
Once populated, the Excel spreadsheet comprises information on ISS components including: operation hours, random/nonrandom failures, software/hardware failures, quantity, orbital replaceable units (ORU), date of placement, unit weight, frequency of part, etc. The motivation for creating such a database is the development of a mass/reliability parametric model to estimate the mass required for replacement parts. Once complete, engineers working on future space flight missions will have access to mean-time-to-failure data for parts along with their mass; this will be used to make proper decisions for long duration space flight missions.
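The mass/reliability tradeoff MaRS is intended to support can be sketched as a spares-sizing calculation: treat each component's failures as a Poisson process with rate 1/MTBF over the mission, stock enough spares to cover failures at a chosen confidence level, and total the mass. This is a generic illustration, not the MaRS parametric model, and every component name, MTBF, and mass below is invented:

```python
import math

def expected_spares_mass(components, mission_hours, confidence=0.95):
    """For each (name, MTBF in hours, mass in kg), size the spares stock
    as the smallest n with Poisson(lam) CDF >= confidence, where
    lam = mission_hours / MTBF, then sum the resulting spares mass."""
    plan, total_mass = {}, 0.0
    for name, mtbf_hours, mass_kg in components:
        lam = mission_hours / mtbf_hours   # expected failures over mission
        # Walk up the Poisson CDF term by term: P(k) = e^-lam * lam^k / k!
        n, term = 0, math.exp(-lam)
        cdf = term
        while cdf < confidence:
            n += 1
            term *= lam / n
            cdf += term
        plan[name] = n
        total_mass += n * mass_kg
    return plan, total_mass

# Invented parts: a heavy, failure-prone pump and a light, robust valve.
parts = [("pump", 20_000, 12.0), ("valve", 50_000, 1.5)]
plan, mass = expected_spares_mass(parts, mission_hours=30_000)
print(plan, mass)  # -> {'pump': 4, 'valve': 2} 51.0
```

The tension the abstract describes falls out directly: raising the confidence level or flying less reliable parts inflates the spares mass, which is exactly the tradeoff a combined mass-and-reliability database makes quantifiable.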
NASA Astrophysics Data System (ADS)
Haddam, N. A.; Michel, E.; Siani, G.; Cortese, G.; Bostock, H. C.; Duprat, J. M.; Isguder, G.
2016-06-01
We present an improved database of planktonic foraminiferal census counts from the Southern Hemisphere oceans (SHO) from 15°S to 64°S. The SHO database combines three existing databases. Using this SHO database, we investigated dissolution biases that might affect faunal census counts. We suggest a depth/ΔCO32- threshold of ~3800 m/ΔCO32- = ~ -10 to -5 µmol/kg for the Pacific and Indian Oceans and ~4000 m/ΔCO32- = ~0 to 10 µmol/kg for the Atlantic Ocean, below which core-top assemblages can be affected by dissolution and are less reliable for paleo-sea surface temperature (SST) reconstructions. We removed all core tops beyond these thresholds from the SHO database. This database has 598 core tops and is able to reconstruct past SST variations from 2° to 25.5°C, with a root mean square error of 1.00°C for annual temperatures. To inspect how dissolution affects SST reconstruction quality, we tested the database with two "leave-one-out" tests, with and without the deep core tops. We used this database to reconstruct summer SST (SSST) over the last 20 ka, using the Modern Analog Technique method, on the Southeast Pacific core MD07-3100. This was compared to the SSST reconstructed using the three databases used to compile the SHO database, showing that the reconstruction using the SHO database is more reliable, as its dissimilarity values are the lowest. This underscores the importance of a bias-free, geographically rich database. We leave this data set open-ended to future additions; new core tops must be carefully selected, with their chronological frameworks established and evidence of dissolution assessed.
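The Modern Analog Technique used above reconstructs SST by finding the modern core-top assemblages most similar to a fossil assemblage, typically using the squared chord distance as the dissimilarity measure, and averaging their observed temperatures. A toy sketch with invented three-species assemblages (the real database uses full faunal census counts):

```python
import math

def squared_chord(a, b):
    """Squared chord distance between two assemblages of relative
    abundances (each summing to 1) - the dissimilarity measure
    commonly used with the Modern Analog Technique (MAT)."""
    return sum((math.sqrt(x) - math.sqrt(y)) ** 2 for x, y in zip(a, b))

def mat_sst(fossil, core_tops, k=3):
    """MAT sketch: rank modern core tops by dissimilarity to the fossil
    sample, take the k best analogs, and average their observed SSTs."""
    ranked = sorted(core_tops, key=lambda ct: squared_chord(fossil, ct[0]))
    nearest = ranked[:k]
    return sum(sst for _, sst in nearest) / k

# Invented modern core tops: (assemblage, observed summer SST in deg C).
core_tops = [
    ([0.7, 0.2, 0.1], 18.0),
    ([0.6, 0.3, 0.1], 16.5),
    ([0.2, 0.3, 0.5], 8.0),
    ([0.1, 0.2, 0.7], 5.5),
]
print(mat_sst([0.65, 0.25, 0.10], core_tops, k=2))  # -> 17.25
```

The abstract's point about dissolution follows directly: if dissolution has altered a deep core-top assemblage, it becomes a misleading analog, so removing such core tops lowers the dissimilarity of the best matches and improves the reconstruction.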
A Framework For Fault Tolerance In Virtualized Servers
2016-06-01
effects into the system. Decrease in performance, the expansion in the total system size and weight, and a hike in the system cost can be counted in... benefit also shines out in terms of reliability. 4. How Data Guard Synchronizes Standby Databases: Primary and standby databases in Oracle Data
A major challenge for scientists and regulators is accounting for the metabolic activation of chemicals that may lead to increased toxicity. Reliable forecasting of chemical metabolism is a critical factor in estimating a chemical’s toxic potential. Research is underway to develo...
Separation and confirmation of showers
NASA Astrophysics Data System (ADS)
Neslušan, L.; Hajduková, M.
2017-02-01
Aims: Using the IAU MDC photographic, IAU MDC CAMS video, SonotaCo video, and EDMOND video databases, we aim to separate all provable annual meteor showers from each of these databases. We intend to reveal the problems inherent in this procedure and answer the question of whether the databases are complete and the methods of separation used are reliable. We aim to evaluate the statistical significance of each separated shower. In this respect, we intend to give a list of reliably separated showers rather than a list of the maximum possible number of showers. Methods: To separate the showers, we simultaneously used two methods. The use of two methods enables us to compare their results, and this can indicate the reliability of the methods. To evaluate the statistical significance, we suggest a new method based on the ideas of the break-point method. Results: We give a compilation of the showers from all four databases using both methods. Using the first (second) method, we separated 107 (133) showers that are in at least one of the databases used. These relatively low numbers are a consequence of discarding any candidate shower with poor statistical significance. Most of the separated showers were identified as meteor showers in the IAU MDC list of all showers. Many of them were identified with several showers in the list, which proves that many showers have been named multiple times under different names. Conclusions: At present, a prevailing share of existing annual showers can be found in the data and confirmed when we use a combination of results from large databases. However, to obtain a complete list of showers, we need meteor databases more complete than the most extensive current ones. We also still need a more sophisticated method to separate showers and evaluate their statistical significance.
Tables A.1 and A.2 are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A40
Edens, John F; Penson, Brittany N; Ruchensky, Jared R; Cox, Jennifer; Smith, Shannon Toney
2016-12-01
Published research suggests that most violence risk assessment tools have relatively high levels of interrater reliability, but recent evidence of inconsistent scores among forensic examiners in adversarial settings raises concerns about the "field reliability" of such measures. This study specifically examined the reliability of Violence Risk Appraisal Guide (VRAG) scores in Canadian criminal cases identified in the legal database LexisNexis. Over 250 reported cases were located that made mention of the VRAG, with 42 of these cases containing 2 or more scores that could be submitted to interrater reliability analyses. Overall, scores were skewed toward higher risk categories. The intraclass correlation (ICC[A,1]) was .66, with pairs of forensic examiners placing defendants into the same VRAG risk "bin" in 68% of the cases. For categorical risk statements (i.e., low, moderate, high), examiners provided converging assessment results in most instances (86%). In terms of potential predictors of rater disagreement, there was no evidence for adversarial allegiance in our sample. Rater disagreement in the scoring of 1 VRAG item (Psychopathy Checklist-Revised; Hare, 2003), however, strongly predicted rater disagreement in the scoring of the VRAG (r = .58). (PsycINFO Database Record (c) 2016 APA, all rights reserved).
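The intraclass correlation reported here, ICC(A,1) in McGraw and Wong's notation (two-way model, absolute agreement, single rater), can be computed from a subjects-by-raters score table via a two-way ANOVA decomposition. The sketch below is a generic implementation of that standard formula, not code from the study.

```python
def icc_a1(scores):
    """ICC(A,1): two-way model, absolute agreement, single rater.

    scores: list of rows, one per subject; each row holds one score per rater.
    """
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((scores[i][j] - grand) ** 2
                   for i in range(n) for j in range(k))
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)            # between-subjects mean square
    msc = ss_cols / (k - 1)            # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With perfectly agreeing raters the statistic is 1.0; systematic between-rater offsets pull it down, which is exactly why the absolute-agreement form suits field-reliability questions.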
Modification Site Localization in Peptides.
Chalkley, Robert J
2016-01-01
There are a large number of search engines designed to take mass spectrometry fragmentation spectra and match them to peptides from proteins in a database. These peptides could be unmodified, but they could also bear modifications that were added biologically or during sample preparation. As a measure of reliability for the peptide identification, software normally calculates how likely a given quality of match could have been achieved at random, most commonly through the use of target-decoy database searching (Elias and Gygi, Nat Methods 4(3): 207-214, 2007). Matching the correct peptide but with the wrong modification localization is not a random match, so results with this error will normally still be assessed as reliable identifications by the search engine. Hence, an extra step is required to determine site localization reliability, and the software approaches to measure this are the subject of this part of the chapter.
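The target-decoy strategy mentioned above estimates error rates by searching a reversed or shuffled decoy database alongside the real one: decoy hits above a score threshold approximate the number of false target hits at that threshold. A minimal sketch, with an illustrative data layout:

```python
def target_decoy_fdr(psms, threshold):
    """Estimate FDR at a score threshold from a target-decoy search.

    psms: list of (score, is_decoy) peptide-spectrum matches.
    FDR is approximated as (# decoy hits) / (# target hits) at or above
    the threshold, since decoys model the random-match score distribution.
    """
    targets = sum(1 for s, d in psms if s >= threshold and not d)
    decoys = sum(1 for s, d in psms if s >= threshold and d)
    return decoys / targets if targets else 0.0
```

As the chapter notes, this estimate cannot catch a correct peptide with a wrongly localized modification, because such a match is not random; site-localization scoring is the required extra step.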
Numerical Databases: Their Vital Role in Information Science.
ERIC Educational Resources Information Center
Carter, G. C.
1985-01-01
In this first of two installments, a description of the interactions of numerical databases (NDBs) for science with the various community sectors highlights data evaluation and the roles that NDBs will likely play. Twenty-four studies and articles dealing with needs for reliable physical, chemical, and mechanical property data are noted. (EJS)
Lindsley, Kristina; Li, Tianjing; Ssemanda, Elizabeth; Virgili, Gianni; Dickersin, Kay
2016-01-01
Topic: Are existing systematic reviews of interventions for age-related macular degeneration incorporated into clinical practice guidelines? Clinical relevance: High-quality systematic reviews should be used to underpin evidence-based clinical practice guidelines and clinical care. We have examined the reliability of systematic reviews of interventions for age-related macular degeneration (AMD) and described the main findings of reliable reviews in relation to clinical practice guidelines. Methods: Eligible publications are systematic reviews of the effectiveness of treatment interventions for AMD. We searched a database of systematic reviews in eyes and vision and employed no language or date restrictions; the database is up-to-date as of May 6, 2014. Two authors independently screened records for eligibility and abstracted and assessed the characteristics and methods of each review. We classified reviews as “reliable” when they reported eligibility criteria, comprehensive searches, appraisal of methodological quality of included studies, appropriate statistical methods for meta-analysis, and conclusions based on results. We mapped treatment recommendations from the American Academy of Ophthalmology Preferred Practice Patterns (AAO PPP) for AMD to the identified systematic reviews and assessed whether any reliable systematic review was cited or could have been cited to support each treatment recommendation. Results: Of 1,570 systematic reviews in our database, 47 met our inclusion criteria. Most of the systematic reviews targeted neovascular AMD and investigated anti-vascular endothelial growth factor (anti-VEGF) interventions, dietary supplements, or photodynamic therapy. We classified over two-thirds (33/47) of the reports as reliable. The quality of reporting varied, with criteria for reliable reporting met more often for Cochrane reviews and for reviews whose authors disclosed conflicts of interest.
Although most systematic reviews were reliable, anti-VEGF agents and photodynamic therapy were the only interventions identified as effective by reliable reviews. Of 35 treatment recommendations extracted from the AAO PPP, 15 could have been supported with reliable systematic reviews; however, only one recommendation had an accompanying intervention systematic review citation, which we assessed as a reliable systematic review. No reliable systematic review was identified for 20 treatment recommendations, highlighting evidence gaps. Conclusions: For AMD, reliable systematic reviews exist for many treatment recommendations in the AAO PPP and should be used to support these recommendations. We also identified areas where no high-level evidence exists. Mapping clinical practice guidelines to existing systematic reviews is one way to highlight areas where evidence generation or evidence synthesis is either available or needed. PMID:26804762
Harju, Inka; Lange, Christoph; Kostrzewa, Markus; Maier, Thomas; Rantakokko-Jalava, Kaisu; Haanperä, Marjo
2017-03-01
Reliable distinction of Streptococcus pneumoniae and viridans group streptococci is important because of the different pathogenic properties of these organisms. Differentiation between S. pneumoniae and closely related Streptococcus mitis species group streptococci has always been challenging, even when using such modern methods as 16S rRNA gene sequencing or matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry. In this study, a novel algorithm combined with an enhanced database was evaluated for differentiation between S. pneumoniae and S. mitis species group streptococci. One hundred one clinical S. mitis species group streptococcal strains and 188 clinical S. pneumoniae strains were identified by both the standard MALDI Biotyper database alone and that combined with a novel algorithm. The database update from 4,613 strains to 5,627 strains drastically improved the differentiation of S. pneumoniae and S. mitis species group streptococci: when the new database version containing 5,627 strains was used, only one of the 101 S. mitis species group isolates was misidentified as S. pneumoniae, whereas 66 of them were misidentified as S. pneumoniae when the earlier 4,613-strain MALDI Biotyper database version was used. The updated MALDI Biotyper database combined with the novel algorithm showed even better performance, producing no misidentifications of the S. mitis species group strains as S. pneumoniae. All S. pneumoniae strains were correctly identified as S. pneumoniae with both the standard MALDI Biotyper database and the standard MALDI Biotyper database combined with the novel algorithm. This new algorithm thus enables reliable differentiation between pneumococci and other S. mitis species group streptococci with the MALDI Biotyper. Copyright © 2017 American Society for Microbiology.
Lange, Toni; Matthijs, Omer; Jain, Nitin B; Schmitt, Jochen; Lützner, Jörg; Kopkow, Christian
2017-03-01
Shoulder pain in the general population is common, and to identify the aetiology of shoulder pain, history, motion and muscle testing, and physical examination tests are usually performed. The aim of this systematic review was to summarise and evaluate the intrarater and inter-rater reliability of physical examination tests in the diagnosis of shoulder pathologies. A comprehensive systematic literature search was conducted using MEDLINE, EMBASE, the Allied and Complementary Medicine Database (AMED) and the Physiotherapy Evidence Database (PEDro) through 20 March 2015. Methodological quality was assessed using the Quality Appraisal of Reliability Studies (QAREL) tool by 2 independent reviewers. The search strategy revealed 3259 articles, of which 18 finally met the inclusion criteria. These studies evaluated the reliability of 62 tests and test variations used as specific physical examination tests for the diagnosis of shoulder pathologies. Methodological quality ranged from 2 to 7 positive criteria out of the 11 items of the QAREL tool. This review identified a lack of high-quality studies evaluating inter-rater as well as intrarater reliability of specific physical examination tests for the diagnosis of shoulder pathologies. In addition, reliability measures differed between the included studies, hindering proper cross-study comparisons. PROSPERO CRD42014009018. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Heinrich, Andreas; Güttler, Felix; Wendt, Sebastian; Schenkl, Sebastian; Hubig, Michael; Wagner, Rebecca; Mall, Gita; Teichgräber, Ulf
2018-06-18
In forensic odontology the comparison between antemortem and postmortem panoramic radiographs (PRs) is a reliable method for person identification. The purpose of this study was to improve and automate identification of unknown people by comparison between antemortem and postmortem PRs using computer vision. The study includes 43 467 PRs from 24 545 patients (46 % female/54 % male). All PRs were filtered and evaluated with Matlab R2014b, including the Image Processing and Computer Vision System toolboxes. The matching process used SURF features to find the corresponding points between two PRs (unknown person and database entry) out of the whole database. Of 40 randomly selected persons, 34 (85 %) could be reliably identified by corresponding PR matching points between an already existing scan in the database and the most recent PR. The systematic matching yielded a maximum of 259 points for a successful identification between two different PRs of the same person and a maximum of 12 corresponding matching points for other, non-identical persons in the database. Hence 12 matching points are the threshold for reliable assignment. Operating with an automatic PR system and computer vision could be a successful and reliable tool for identification purposes. The applied method distinguishes itself by virtue of its fast and reliable identification of persons by PR. This identification method is suitable even if dental characteristics were removed or added in the past. The system seems to be robust for large amounts of data. · Computer vision allows an automated antemortem and postmortem comparison of panoramic radiographs (PRs) for person identification. · The present method is able to find identical matching partners among huge datasets (big data) in a short computing time. · The identification method is suitable even if dental characteristics were removed or added. · Heinrich A, Güttler F, Wendt S et al.
Forensic Odontology: Automatic Identification of Persons Comparing Antemortem and Postmortem Panoramic Radiographs Using Computer Vision. Fortschr Röntgenstr 2018; DOI: 10.1055/a-0632-4744. © Georg Thieme Verlag KG Stuttgart · New York.
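The decision rule the study arrives at, identify a person only when the feature-match count clears the level that non-identical persons never reached, can be sketched in a few lines. The function and parameter names are illustrative; the actual pipeline runs SURF matching in Matlab, which is not reproduced here.

```python
def identify(match_counts, min_matches=13):
    """Return the database entry ID whose PR shares the most feature matches
    with the query PR, or None when no entry clears the reliability threshold.

    match_counts: dict mapping entry ID -> number of corresponding feature
    points between that entry's PR and the query PR. The default threshold
    reflects the study's observation that non-identical persons never
    exceeded 12 matching points, while true matches reached up to 259.
    """
    if not match_counts:
        return None
    best_id, best = max(match_counts.items(), key=lambda kv: kv[1])
    return best_id if best >= min_matches else None
```

The wide gap between the two regimes (12 versus 259 points) is what makes such a simple threshold workable.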
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancey, P.; Logg, C.
DEPOT has been developed to provide tracking for the Stanford Linear Collider (SLC) control system equipment. For each piece of equipment entered into the database, complete location, service, maintenance, modification, certification, and radiation exposure histories can be maintained. To facilitate data entry accuracy, efficiency, and consistency, barcoding technology has been used extensively. DEPOT has been an important tool in improving the reliability of the microsystems controlling SLC. This document describes the components of the DEPOT database, the elements in the database records, and the use of the supporting programs for entering data, searching the database, and producing reports from the information.
Comparison of hospital databases on antibiotic consumption in France, for a single management tool.
Henard, S; Boussat, S; Demoré, B; Clément, S; Lecompte, T; May, T; Rabaud, C
2014-07-01
The surveillance of antibiotic use in hospitals and of data on resistance is an essential measure for antibiotic stewardship. There are 3 national systems in France to collect data on antibiotic use: DREES, ICATB, and ATB RAISIN. We compared these databases and drafted recommendations for the creation of an optimized database of information on antibiotic use, available to all concerned personnel: healthcare authorities, healthcare facilities, and healthcare professionals. We processed and analyzed the 3 databases (2008 data), and surveyed users. The qualitative analysis demonstrated major discrepancies in terms of objectives, healthcare facilities, participation rate, units of consumption, conditions for collection, consolidation, and control of data, and delay before availability of results. The quantitative analysis revealed that the consumption data for a given healthcare facility differed from one database to another, challenging the reliability of data collection. We specified user expectations: to compare consumption and resistance data, to carry out benchmarking, to obtain data on the prescribing habits in healthcare units, or to help understand results. The study results demonstrated the need for a reliable, single, and automated tool to manage data on antibiotic consumption compared with resistance data on several levels (national, regional, healthcare facility, healthcare units), providing rapid local feedback and educational benchmarking. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
The Reliability of Methodological Ratings for speechBITE Using the PEDro-P Scale
ERIC Educational Resources Information Center
Murray, Elizabeth; Power, Emma; Togher, Leanne; McCabe, Patricia; Munro, Natalie; Smith, Katherine
2013-01-01
Background: speechBITE (http://www.speechbite.com) is an online database established in order to help speech and language therapists gain faster access to relevant research that can be used in clinical decision-making. In addition to containing more than 3000 journal references, the database also provides methodological ratings on the PEDro-P (an…
Design, Development, and Maintenance of the GLOBE Program Website and Database
NASA Technical Reports Server (NTRS)
Brummer, Renate; Matsumoto, Clifford
2004-01-01
This is a 1-year (FY03) proposal to design and develop enhancements, implement improved efficiency and reliability, and provide responsive maintenance for the operational GLOBE (Global Learning and Observations to Benefit the Environment) Program website and database. This proposal is renewable, with a 5% annual inflation factor providing an approximate cost for the out years.
Reliability Information Analysis Center 1st Quarter 2007, Technical Area Task (TAT) Report
2007-02-05
• Created new SQL Server database for "PC Configuration" web application. Added roles for security (closed 4235) and posted application to production. • Wrote...and ran SQL Server scripts to migrate production databases to new server. • Created backup jobs for new SQL Server databases. • Continued...second phase of the TENA demo. Extensive tasking was established and assigned. A TENA interface to EW Server was reaffirmed after some uncertainty about
[Screening of Nutritional Status among Elderly People at Family Medicine].
Račić, M; Ivković, N; Kusmuk, S
2015-11-01
The prevalence of malnutrition in elderly is high. Malnutrition or risk of malnutrition can be detected by use of nutritional screening or assessment tools. This systematic review aimed to identify tools that would be reliable, valid, sensitive and specific for nutritional status screening in patients older than 65 at family medicine. The review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Studies were retrieved using MEDLINE (via Ovid), PubMed and Cochrane Library electronic databases and by manual searching of relevant articles listed in reference list of key publications. The electronic databases were searched using defined key words adapted to each database and using MESH terms. Manual revision of reviews and original articles was performed using Electronic Journals Library. Included studies involved development and validation of screening tools in the community-dwelling elderly population. The tools, subjected to validity and reliability testing for use in the community-dwelling elderly population were Mini Nutritional Assessment (MNA), Mini Nutritional Assessment-Short Form (MNA-SF), Nutrition Screening Initiative (NSI), which includes DETERMINE list, Level I and II Screen, Seniors in the Community: Risk Evaluation for Eating, and Nutrition (SCREEN I and SCREEN II), Subjective Global Assessment (SGA), Nutritional Risk Index (NRI), and Malaysian and South African tool. MNA and MNA-SF appear to have highest reliability and validity for screening of community-dwelling elderly, while the reliability and validity of SCREEN II are good. The authors conclude that whilst several tools have been developed, most have not undergone extensive testing to demonstrate their ability to identify nutritional risk. MNA and MNA-SF have the highest reliability and validity for screening of nutritional status in the community-dwelling elderly, and the reliability and validity of SCREEN II are satisfactory. 
These instruments also contain all three nutritional status indicators and are practical for use in family medicine. However, a gold standard for screening cannot yet be set, because reliability testing and continued validation in studies with a higher level of evidence still need to be conducted in family medicine.
The Reliability of College Grades
ERIC Educational Resources Information Center
Beatty, Adam S.; Walmsley, Philip T.; Sackett, Paul R.; Kuncel, Nathan R.; Koch, Amanda J.
2015-01-01
Little is known about the reliability of college grades relative to how prominently they are used in educational research, and the results to date tend to be based on small sample studies or are decades old. This study uses two large databases (N > 800,000) from over 200 educational institutions spanning 13 years and finds that both first-year…
NASA Astrophysics Data System (ADS)
Taira, Ricky K.; Chan, Kelby K.; Stewart, Brent K.; Weinberg, Wolfram S.
1991-07-01
Reliability is an increasing concern when moving PACS from the experimental laboratory to the clinical environment. Any system downtime may seriously affect patient care. The authors report on several classes of errors encountered during the pre-clinical release of the PACS over the past several months and present the solutions implemented to handle them. The reliability issues discussed include: (1) environmental precautions, (2) database backups, (3) monitor routines for critical resources and processes, (4) hardware redundancy (networks, archives), and (5) development of a PACS quality control program.
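A monitor routine for critical resources, item (3) above, can be as simple as a periodic free-space check on the archive volume. The sketch below is a generic standard-library illustration, not the authors' implementation, and the 10% threshold is an assumed value.

```python
import shutil

def check_archive_space(path, min_free_fraction=0.10):
    """Return (ok, free_fraction) for an archive volume.

    Raising an alert when free space drops below the threshold lets the
    PACS operator act before image storage starts failing.
    """
    usage = shutil.disk_usage(path)
    free_fraction = usage.free / usage.total
    return free_fraction >= min_free_fraction, free_fraction
```

In practice such a check would run from a scheduler and feed the quality control program's alerting, alongside process-liveness checks.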
Jordan, Kelvin; Clarke, Alexandra M; Symmons, Deborah PM; Fleming, Douglas; Porcheret, Mark; Kadam, Umesh T; Croft, Peter
2007-01-01
Background: Primary care consultation data are an important source of information on morbidity prevalence. It is not known how reliable such figures are. Aim: To compare annual consultation prevalence estimates for musculoskeletal conditions derived from four general practice consultation databases. Design of study: Retrospective study of general practice consultation records. Setting: Three national general practice consultation databases: i) Fourth Morbidity Statistics from General Practice (MSGP4, 1991/92), ii) Royal College of General Practitioners Weekly Returns Service (RCGP WRS, 2001), and iii) General Practice Research Database (GPRD, 1991 and 2001); and one regional database (Consultations in Primary Care Archive, 2001). Method: Age-sex standardised persons-consulting annual prevalence rates for musculoskeletal conditions overall, rheumatoid arthritis, osteoarthritis, and arthralgia were derived for patients aged 15 years and over. Results: GPRD prevalence of any musculoskeletal condition, rheumatoid arthritis, and osteoarthritis was lower than that of the other databases. This is likely to be due to GPs not needing to record every consultation made for a chronic condition. MSGP4 gave the highest prevalence for osteoarthritis but a low prevalence of arthralgia, which reflects encouragement for GPs to use diagnostic rather than symptom codes. Conclusion: Considerable variation exists in consultation prevalence estimates for musculoskeletal conditions. Researchers and health service planners should be aware that estimates of disease occurrence based on consultation will be influenced by choice of database. This is likely to be true for other chronic diseases and where alternative symptom labels exist for a disease. RCGP WRS may give the most reliable prevalence figures for musculoskeletal and other chronic diseases. PMID:17244418
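The age-sex standardisation used to make prevalence rates comparable across databases is direct standardisation: each stratum's rate is weighted by the reference population's share of that stratum. A minimal sketch, with illustrative stratum labels:

```python
def standardized_rate(stratum_rates, standard_population):
    """Directly standardized rate across age-sex strata.

    stratum_rates: dict stratum -> rate (e.g. persons consulting per 1000).
    standard_population: dict stratum -> person count in the reference
    population, used as weights so databases with different age-sex mixes
    can be compared fairly.
    """
    total = sum(standard_population.values())
    return sum(rate * standard_population[s] / total
               for s, rate in stratum_rates.items())
```

Without this weighting, a database covering an older practice population would report higher musculoskeletal prevalence purely through its demographics.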
ERIC Educational Resources Information Center
Reardon, Sean F.; Kalogrides, Demetra; Ho, Andrew D.
2017-01-01
There is no comprehensive database of U.S. district-level test scores that is comparable across states. We describe and evaluate a method for constructing such a database. First, we estimate linear, reliability-adjusted linking transformations from state test score scales to the scale of the National Assessment of Educational Progress (NAEP). We…
Herpes zoster surveillance using electronic databases in the Valencian Community (Spain)
2013-01-01
Background: Epidemiologic data on Herpes Zoster (HZ) disease in Spain are scarce. The objective of this study was to assess the epidemiology of HZ in the Valencian Community (Spain), using outpatient and hospital electronic health databases. Methods: Data from 2007 to 2010 were collected from computerized health databases covering a population of around 5 million inhabitants. Diagnoses were recorded by physicians using the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM). A sample of medical records selected under different criteria was reviewed by a general practitioner to assess the reliability of coding. Results: The average annual incidence of HZ was 4.60 per 1000 person-years (PY) for all ages (95% CI: 4.57-4.63); HZ is more frequent in women [5.32/1000 PY (95% CI: 5.28-5.37)] and is strongly age-related, with a peak incidence at 70-79 years. A total of 7.16/1000 cases of HZ required hospitalization. Conclusions: The electronic health database used in the Valencian Community is a reliable electronic surveillance tool for HZ disease and will be useful to define trends in disease burden before and after HZ vaccine introduction. PMID:24094135
Barnes, Brian B.; Wilson, Michael B.; Carr, Peter W.; Vitha, Mark F.; Broeckling, Corey D.; Heuberger, Adam L.; Prenni, Jessica; Janis, Gregory C.; Corcoran, Henry; Snow, Nicholas H.; Chopra, Shilpi; Dhandapani, Ramkumar; Tawfall, Amanda; Sumner, Lloyd W.; Boswell, Paul G.
2014-01-01
Gas chromatography-mass spectrometry (GC-MS) is a primary tool used to identify compounds in complex samples. Both mass spectra and GC retention times are matched to those of standards, but it is often impractical to have standards on hand for every compound of interest, so we must rely on shared databases of MS data and GC retention information. Unfortunately, retention databases (e.g. linear retention index libraries) are experimentally restrictive, notoriously unreliable, and strongly instrument dependent, relegating GC retention information to a minor, often negligible role in compound identification despite its potential power. A new methodology called “retention projection” has great potential to overcome the limitations of shared chromatographic databases. In this work, we tested the reliability of the methodology in five independent laboratories. We found that even when each lab ran nominally the same method, the methodology was 3-fold more accurate than retention indexing because it properly accounted for unintentional differences between the GC-MS systems. When the labs used different methods of their own choosing, retention projections were 4- to 165-fold more accurate. More importantly, the distribution of error in the retention projections was predictable across different methods and labs, thus enabling automatic calculation of retention time tolerance windows. Tolerance windows at 99% confidence were generally narrower than those widely used even when physical standards are on hand to measure their retention. With its high accuracy and reliability, the new retention projection methodology makes GC retention a reliable, precise tool for compound identification, even when standards are not available to the user. PMID:24205931
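The linear retention index libraries that the paper criticizes are built on the van den Dool and Kratz convention: an analyte's retention time is interpolated between the two bracketing n-alkane standards. A minimal sketch of that conventional calculation follows; retention projection itself, the paper's method, additionally models the temperature program and instrument differences and is not reproduced here.

```python
import bisect

def linear_retention_index(t_x, alkanes):
    """Linear (van den Dool & Kratz) retention index for
    temperature-programmed GC.

    alkanes: list of (carbon_number, retention_time) pairs for n-alkane
    standards, sorted by retention time. RI = 100 * (n + fractional
    position of the analyte between alkane n and alkane n+1).
    """
    times = [t for _, t in alkanes]
    i = bisect.bisect_right(times, t_x)
    if i == 0 or i == len(times):
        raise ValueError("analyte elutes outside the alkane bracket")
    (n_lo, t_lo), (n_hi, t_hi) = alkanes[i - 1], alkanes[i]
    return 100 * (n_lo + (n_hi - n_lo) * (t_x - t_lo) / (t_hi - t_lo))
```

Because this index still depends on the actual temperature program and column, shared index libraries accumulate the instrument-dependent error the abstract describes.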
Davis, Chester L; Pierce, John R; Henderson, William; Spencer, C David; Tyler, Christine; Langberg, Robert; Swafford, Jennan; Felan, Gladys S; Kearns, Martha A; Booker, Brigitte
2007-04-01
The Office of the Medical Inspector of the Department of Veterans Affairs (VA) studied the reliability of data collected by the VA's National Surgical Quality Improvement Program (NSQIP). The study focused on case selection bias, accuracy of reports on patients who died, and interrater reliability measurements of patient risk variables and outcomes. Surgical data from a sample of 15 VA medical centers were analyzed. For case selection bias, reviewers applied NSQIP criteria to include or exclude 2,460 patients from the database, comparing their results with those of NSQIP staff. For accurate reporting of patients who died, reviewers compared Social Security numbers of 10,444 NSQIP records with those found in the VA Beneficiary Identification and Records Locator Subsystem, VA Patient Treatment Files, and Social Security Administration death files. For measurement of interrater reliability, reviewers reabstracted 59 variables in each of 550 patient medical records that also were recorded in the NSQIP database. On case selection bias, the reviewers agreed with NSQIP decisions on 2,418 (98%) of the 2,460 cases. Computer record matching identified 4 more deaths than the NSQIP total of 198, a difference of about 2%. For 52 of the categorical variables, agreement, uncorrected for chance, was 96%. For 48 of 52 categorical variables, kappas ranged from 0.61 to 1.0 (substantial to almost perfect agreement); none of the variables had kappas of less than 0.20 (slight to poor agreement). This sample of medical centers shows adherence to criteria in selecting cases for the NSQIP database, for reporting deaths, and for collecting patient risk variables.
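The kappa statistics reported for the categorical variables measure agreement corrected for chance. A generic two-rater Cohen's kappa can be sketched as follows; the NSQIP analysis itself may have used weighted or multi-rater variants, so this is an illustration of the statistic, not their exact procedure.

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical labels.

    kappa = (p_obs - p_exp) / (1 - p_exp), where p_obs is raw agreement and
    p_exp is the agreement expected if both raters labeled at random
    according to their own marginal frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp) if p_exp != 1 else 1.0
```

On the usual interpretation scale, the study's kappas of 0.61 to 1.0 indicate substantial to almost perfect agreement, while values near 0 would mean agreement no better than chance.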
NASA Astrophysics Data System (ADS)
Michel-Sendis, Franco; Martinez-González, Jesus; Gauld, Ian
2017-09-01
SFCOMPO-2.0 is a database of experimental isotopic concentrations measured in destructive radiochemical analysis of spent nuclear fuel (SNF) samples. The database includes corresponding design description of the fuel rods and assemblies, relevant operating conditions and characteristics of the host reactors necessary for modelling and simulation. Aimed at establishing a thorough, reliable, and publicly available resource for code and data validation of safety-related applications, SFCOMPO-2.0 is developed and maintained by the OECD Nuclear Energy Agency (NEA). The SFCOMPO-2.0 database is a Java application which is downloadable from the NEA website.
NASA Astrophysics Data System (ADS)
Kutzleb, C. D.
1997-02-01
The high incidence of recidivism (repeat offenders) in the criminal population makes the use of the IAFIS III/FBI criminal database an important tool in law enforcement. The problems and solutions employed by IAFIS III/FBI criminal subject searches are discussed for the following topics: (1) subject search selectivity and reliability; (2) the difficulty and limitations of identifying subjects whose anonymity may be a prime objective; (3) database size, search workload, and search response time; (4) techniques and advantages of normalizing the variability in an individual's name and identifying features into identifiable and discrete categories; and (5) the use of database demographics to estimate the likelihood of a match between a search subject and database subjects.
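Normalizing the variability in a name into discrete, identifiable categories, item (4) above, is classically done with phonetic bucketing such as Soundex; whether IAFIS uses Soundex specifically is not stated here, so the sketch below only illustrates the general technique.

```python
def soundex(name):
    """American Soundex code: buckets spelling variants of a surname.

    Letters map to digit classes (B/F/P/V=1, C/G/J/K/Q/S/X/Z=2, D/T=3,
    L=4, M/N=5, R=6); vowels separate runs, H/W do not; the code is the
    first letter plus the first three distinct digits, zero-padded.
    """
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return ""
    digits = [codes.get(name[0], "")]
    for ch in name[1:]:
        if ch in "HW":
            continue                 # H and W do not break a run of equal codes
        code = codes.get(ch, "")     # vowels map to "" and reset the run
        if code and code != digits[-1]:
            digits.append(code)
        elif not code:
            digits.append("")
    result = name[0] + "".join(d for d in digits[1:] if d)
    return (result + "000")[:4]
```

Because "Robert" and "Rupert" share the code R163, a search on the normalized category survives the spelling variability that an exact-match lookup would miss, improving selectivity-versus-recall trade-offs of the kind the paper discusses.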
Study of complete interconnect reliability for a GaAs MMIC power amplifier
NASA Astrophysics Data System (ADS)
Lin, Qian; Wu, Haifeng; Chen, Shan-ji; Jia, Guoqing; Jiang, Wei; Chen, Chao
2018-05-01
By combining finite element analysis (FEA) and artificial neural network (ANN) techniques, a complete prediction of interconnect reliability for a monolithic microwave integrated circuit (MMIC) power amplifier (PA) under both direct current (DC) and alternating current (AC) operating conditions is achieved in this article. As an example, an MMIC PA is modelled to study electromigration failure of its interconnects. This is the first study of interconnect reliability for an MMIC PA under DC and AC operation simultaneously. By training on data from the FEA, a high-accuracy ANN model for PA reliability is constructed. The reliability database obtained from the ANN model then gives important guidance for improving the reliability design of the IC.
Ingersoll, Christopher G.; Haverland, Pamela S.; Brunson, Eric L.; Canfield, Timothy J.; Dwyer, F. James; Henke, Chris; Kemble, Nile E.; Mount, David R.; Fox, Richard G.
1996-01-01
Procedures are described for calculating and evaluating sediment effect concentrations (SECs) using laboratory data on the toxicity of contaminants associated with field-collected sediment to the amphipod Hyalella azteca and the midge Chironomus riparius. SECs are defined as the concentrations of individual contaminants in sediment below which toxicity is rarely observed and above which toxicity is frequently observed. The objective of the present study was to develop SECs to classify toxicity data for Great Lakes sediment samples tested with Hyalella azteca and Chironomus riparius. The SEC database also included samples from additional sites across the United States in order to make it as robust as possible. Three types of SECs were calculated from these data: (1) Effect Range Low (ERL) and Effect Range Median (ERM), (2) Threshold Effect Level (TEL) and Probable Effect Level (PEL), and (3) No Effect Concentration (NEC). We were able to calculate SECs primarily for total metals, simultaneously extracted metals, polychlorinated biphenyls (PCBs), and polycyclic aromatic hydrocarbons (PAHs). The ranges of concentrations in sediment were too narrow in our database to adequately evaluate SECs for butyltins, methyl mercury, polychlorinated dioxins and furans, or chlorinated pesticides. About 60 to 80% of the sediment samples in the database are correctly classified as toxic or not toxic, depending on the type of SEC evaluated. ERMs and ERLs are generally as reliable as paired PELs and TELs at classifying both toxic and non-toxic samples in our database. The reliability of the SECs in terms of correctly classifying sediment samples is similar between ERMs and NECs; however, ERMs minimize Type I error (false positives) relative to ERLs and minimize Type II error (false negatives) relative to NECs. Correct classification of samples can be improved by using only the most reliable individual SECs for chemicals (i.e., those with a higher percentage of correct classification).
SECs calculated using sediment concentrations normalized to total organic carbon (TOC) did not improve reliability compared with SECs calculated using dry-weight concentrations. The range of TOC concentrations in our database was relatively narrow compared with the ranges of contaminant concentrations; therefore, normalizing dry-weight concentrations to a relatively narrow range of TOC concentrations had little influence on the relative concentrations of contaminants among samples. When SECs are used as a preliminary screen to predict the potential for toxicity in the absence of actual toxicity testing, a low number of SEC exceedances should be used to minimize false negatives, at the cost of accepting more false positives.
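The screening rule described above (flag a sample as potentially toxic once its concentrations exceed a chosen number of SECs) can be sketched as follows. The analyte names and ERM values are illustrative placeholders, not the published thresholds.

```python
# Sketch of SEC-based screening: count how many sediment effect
# concentrations a sample exceeds and flag it when that count reaches a
# threshold. Values below are hypothetical, not the study's ERMs.

ERM_MG_PER_KG = {          # hypothetical dry-weight ERMs
    "cadmium": 9.6,
    "lead": 218.0,
    "total_PAH": 44.8,
}

def count_exceedances(sample):
    """Count how many SECs the sample's concentrations exceed."""
    return sum(
        1 for analyte, erm in ERM_MG_PER_KG.items()
        if sample.get(analyte, 0.0) > erm
    )

def screen_sample(sample, min_exceedances=1):
    """Flag a sample as potentially toxic.

    A low threshold (min_exceedances=1) minimizes false negatives at the
    cost of more false positives, as the text recommends for screening.
    """
    return count_exceedances(sample) >= min_exceedances

sample = {"cadmium": 12.0, "lead": 40.0, "total_PAH": 10.0}
print(screen_sample(sample))  # cadmium exceeds its ERM -> True
```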
Gao, Xiang; Lin, Huaiying; Revanna, Kashi; Dong, Qunfeng
2017-05-10
Species-level classification of 16S rRNA gene sequences remains a serious challenge for microbiome researchers, because existing taxonomic classification tools either do not provide species-level classification or produce unreliable results. The unreliability stems from limitations of existing methods, which either lack solid probabilistic criteria for evaluating the confidence of their taxonomic assignments or use nucleotide k-mer frequency as a proxy for sequence similarity. We have developed a method that shows significantly improved species-level classification over existing methods. Our method calculates true sequence similarity between query sequences and database hits using pairwise sequence alignment. Taxonomic classifications are assigned from the species to the phylum level based on the lowest common ancestors of multiple database hits for each query sequence, and classification reliability is evaluated by bootstrap confidence scores. The novelty of our method is that the contribution of each database hit to the taxonomic assignment of the query sequence is weighted by a Bayesian posterior probability based on the degree of sequence similarity between the hit and the query. Our method needs no training datasets specific to different taxonomic groups; only a reference database is required for alignment against the query sequences, making the method easily applicable to different regions of the 16S rRNA gene or to other phylogenetic marker genes. Reliable species-level classification for 16S rRNA or other phylogenetic marker genes is critical for microbiome research. Our software shows significantly higher classification accuracy than existing tools, and we provide probability-based confidence scores to evaluate the reliability of our taxonomic assignments based on multiple database matches to query sequences.
Despite its higher computational costs, our method is still suitable for analyzing large-scale microbiome datasets for practical purposes. Furthermore, our method can be applied for taxonomic classification of any phylogenetic marker gene sequences. Our software, called BLCA, is freely available at https://github.com/qunfengdong/BLCA .
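A minimal sketch of the weighting idea described above, with invented similarities and lineages; the real BLCA derives weights from pairwise alignment and adds bootstrapped confidence scores.

```python
# Each database hit votes for its taxonomic lineage; the vote weight grows
# with alignment similarity, so near-identical hits dominate distant ones.
# Similarities, lineages, and the exponential weighting scale are
# illustrative assumptions, not BLCA's exact posterior formula.

import math

def hit_weight(similarity, scale=20.0):
    """Turn a 0-1 alignment similarity into an exponential vote weight."""
    return math.exp(scale * similarity)

def classify(hits):
    """hits: list of (lineage dict by rank, similarity).

    Returns, per rank, the taxon with the largest total weight and its
    normalized support score."""
    result = {}
    ranks = {r for lineage, _ in hits for r in lineage}
    for rank in ranks:
        votes = {}
        for lineage, sim in hits:
            taxon = lineage.get(rank)
            if taxon:
                votes[taxon] = votes.get(taxon, 0.0) + hit_weight(sim)
        best = max(votes, key=votes.get)
        result[rank] = (best, votes[best] / sum(votes.values()))
    return result

hits = [
    ({"genus": "Prevotella", "species": "P. oris"}, 0.99),
    ({"genus": "Prevotella", "species": "P. oris"}, 0.97),
    ({"genus": "Prevotella", "species": "P. buccae"}, 0.90),
]
print(classify(hits)["species"])
```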
Matsuda, Fumio; Shinbo, Yoko; Oikawa, Akira; Hirai, Masami Yokota; Fiehn, Oliver; Kanaya, Shigehiko; Saito, Kazuki
2009-01-01
Background In metabolomics research using mass spectrometry (MS), systematically searching high-resolution mass data against compound databases is often the first step of metabolite annotation, to determine elemental compositions with similar theoretical masses. However, incorrect hits arising from errors in the mass analyses will be included in the results of elemental composition searches. To assess the quality of peak annotation, a novel methodology for evaluating false discovery rates (FDRs) is presented in this study. Based on the FDR analyses, several aspects of elemental composition searching are discussed, including setting a threshold, estimating the FDR, and which types of elemental composition databases are most reliable for searching. Methodology/Principal Findings The FDR can be determined from one measured value (i.e., the hit rate for search queries) and four parameters determined by Monte Carlo simulation. The results indicate that relatively high FDRs (30-50%) were obtained when searching time-of-flight (TOF)/MS data against the KNApSAcK and KEGG databases. In addition, searches against large all-in-one databases (e.g., PubChem) always produced unacceptable results (FDR >70%). The estimated FDRs suggest that the quality of search results can be improved not only by more accurate mass analysis but also by modifying the properties of the compound database. A theoretical analysis indicates that the FDR could be improved by using a compound database with fewer but more complete entries. Conclusions/Significance High-accuracy mass analysis, such as Fourier transform (FT)-MS, is needed for reliable annotation (FDR <10%). In addition, a small, customized compound database is preferable for high-quality annotation of metabolome data. PMID:19847304
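As a rough illustration of the Monte Carlo idea (not the paper's four-parameter estimator), one can measure how often random query masses hit some database entry within the instrument's tolerance; the toy database and the tolerance values below are assumptions.

```python
# Draw random masses in the database's range and count how often they match
# a database mass within +/- tolerance_ppm. That chance-hit rate gauges how
# many real-search hits could be spurious: a tight (FT-MS-like) tolerance
# yields far fewer chance hits than a loose (TOF-like) one.

import random

def chance_hit_rate(db_masses, tolerance_ppm, n_trials=10000, seed=1):
    """Fraction of random masses matching at least one database entry."""
    rng = random.Random(seed)
    lo, hi = min(db_masses), max(db_masses)
    db = sorted(db_masses)
    hits = 0
    for _ in range(n_trials):
        m = rng.uniform(lo, hi)
        tol = m * tolerance_ppm * 1e-6
        if any(abs(m - d) <= tol for d in db):
            hits += 1
    return hits / n_trials

# A sparse toy database of 200 evenly spaced masses (assumed, not KEGG).
db = [100.0 + 0.137 * i for i in range(200)]
print(chance_hit_rate(db, 5))     # ~5 ppm, FT-MS-like
print(chance_hit_rate(db, 500))   # ~500 ppm, TOF-like
```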
Ali, Zulfiqar; Alsulaiman, Mansour; Muhammad, Ghulam; Elamvazuthi, Irraivan; Al-Nasheri, Ahmed; Mesallam, Tamer A; Farahat, Mohamed; Malki, Khalid H
2017-05-01
A large population around the world has voice complications. Various approaches for subjective and objective evaluation have been suggested in the literature. The subjective approach depends strongly on the experience and expertise of the clinician, and human error cannot be neglected; the objective, or automatic, approach is noninvasive. Automatic systems can provide complementary information that may help a clinician in the early screening of a voice disorder, and they can be deployed in remote areas, where a general practitioner can use them and refer the patient to a specialist in time to avoid complications that may be life threatening. Many automatic disorder-detection systems have been developed using conventional speech features such as linear prediction coefficients, linear prediction cepstral coefficients, and Mel-frequency cepstral coefficients (MFCCs). This study aims to ascertain whether conventional speech features detect voice pathology reliably, and whether they can be correlated with voice quality. To investigate this, an automatic detection system based on MFCCs was developed and evaluated on three different voice disorder databases. The experimental results suggest that the accuracy of the MFCC-based system varies from database to database: the intra-database detection rate ranges from 72% to 95%, and the inter-database rate from 47% to 82%. The results indicate that conventional speech features are not correlated with voice quality and hence are not reliable for pathology detection. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Vlach, Jiří; Junková, Petra; Karamonová, Ludmila; Blažková, Martina; Fukal, Ladislav
2017-01-01
ABSTRACT In the last decade, strains of the genera Franconibacter and Siccibacter have been misclassified, first as Enterobacter and later as Cronobacter. Because Cronobacter is a serious foodborne pathogen that affects premature neonates and elderly individuals, such misidentification may not only distort epidemiological statistics but also lead to false results in tests of powdered infant formula or other foods. Currently, the main ways of identifying Franconibacter and Siccibacter strains are biochemical testing and sequencing of the fusA gene as part of Cronobacter multilocus sequence typing (MLST); for these strains, however, the former is generally difficult and unreliable, while the latter remains expensive. To address this, we developed a fast, simple, and, most importantly, reliable method for Franconibacter and Siccibacter identification based on intact-cell matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS). Our method integrates the following steps: data preprocessing using mMass software; principal-component analysis (PCA) for the selection of mass spectrum fingerprints of Franconibacter and Siccibacter strains; and optimization of the Biotyper database settings for the creation of main spectrum projections (MSPs). This methodology enabled us to create an in-house MALDI MS database that extends the current MALDI Biotyper database with Franconibacter and Siccibacter strains. Finally, we verified our approach using seven previously unclassified strains, all of which were correctly identified, thereby validating our method. IMPORTANCE We show that the majority of methods currently used for the identification of Franconibacter and Siccibacter bacteria cannot properly distinguish these strains from those of Cronobacter. While sequencing of the fusA gene as part of Cronobacter MLST remains the most reliable such method, it is expensive and time-consuming.
Here, we demonstrate a cost-effective and reliable alternative that correctly distinguishes between Franconibacter, Siccibacter, and Cronobacter bacteria and identifies Franconibacter and Siccibacter at the species level. Using intact-cell MALDI-TOF MS, we extend the current MALDI Biotyper database with 11 Franconibacter and Siccibacter MSPs. In addition, the use of our approach is likely to lead to a more reliable identification scheme for Franconibacter and Siccibacter strains and, consequently, a more trustworthy epidemiological picture of their involvement in disease. PMID:28455327
Herlin, Christian; Doucet, Jean Charles; Bigorre, Michèle; Khelifa, Hatem Cheikh; Captier, Guillaume
2013-10-01
Treacher Collins syndrome (TCS) is a severe and complex craniofacial malformation affecting the facial skeleton and soft tissues. The palate and the external and middle ear are also affected, but its prognosis is mainly related to neonatal airway management. Methods of zygomatico-orbital reconstruction are numerous and currently rely primarily on autologous bone, lyophilized cartilage, alloplastic implants, or even free flaps. This work developed a reliable "customized" method of zygomatico-orbital bony reconstruction using a generic reference model tailored to each patient. From standard computed tomography (CT) acquisitions, we studied qualitatively and quantitatively the skeletons of four individuals with TCS aged between 6 and 20 years. In parallel, we studied 40 controls of the same ages to obtain a morphometric reference database. Surgical simulation was carried out using validated craniofacial surgery software. The zygomatic hypoplasia was quantitatively and morphologically pronounced in all TCS individuals, whereas orbital involvement was mainly morphological, with volumes comparable to those of age-matched controls. The control database was used to create three-dimensional computer models for the manufacture of cutting guides for autologous cranial bone grafts or alloplastic implants perfectly adapted to each patient's morphology. Presurgical simulation was also used to fabricate custom positioning guides, permitting a simple and reliable surgical procedure. The use of a virtual database allowed us to design a reliable and reproducible skeletal reconstruction method for this rare and complex syndrome. Presurgical simulation tools seem essential in this type of craniofacial malformation to increase the reliability of these uncommon and complex surgical procedures and to ensure stable results over time. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Wenner, Joshua B; Norena, Monica; Khan, Nadia; Palepu, Anita; Ayas, Najib T; Wong, Hubert; Dodek, Peter M
2009-09-01
Although the reliability of severity-of-illness scores and predicted probabilities of hospital mortality has been assessed, the interrater reliability of abstracting primary and other intensive care unit (ICU) admitting diagnoses and underlying comorbidities has not been studied. Patient data from one ICU were originally abstracted and entered into an electronic database by an ICU nurse. Using an identical electronic database, a research assistant reabstracted patient demographics, ICU admitting diagnoses, underlying comorbidities, and elements of the Acute Physiology and Chronic Health Evaluation II (APACHE II) score for 100 patients randomly selected from the 474 admitted during 2005. Chamberlain's percent positive agreement was used to compare diagnoses and comorbidities between the two data abstractors. A kappa statistic was calculated for demographic variables, Glasgow Coma Score, APACHE II chronic health points, and HIV status; intraclass correlation was calculated for acute physiology points and predicted probability of hospital mortality. Percent positive agreement for ICU primary and other admitting diagnoses ranged from 0% (primary brain injury) to 71% (sepsis), and for underlying comorbidities, from 40% (coronary artery bypass graft) to 100% (HIV). Agreement as measured by the kappa statistic was strong for race (0.81) and age points (0.95), moderate for chronic health points (0.50) and HIV (0.66), and poor for Glasgow Coma Score (0.36). Intraclass correlation showed moderate-to-high agreement for acute physiology points (0.88) and predicted probability of hospital mortality (0.71). Reliability of ICU diagnoses and elements of the APACHE II score is related to the objectivity of the primary data in the medical charts.
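The kappa values quoted above can be reproduced with a generic two-rater Cohen's kappa; the ratings below are invented for illustration and are not the study's data.

```python
# Cohen's kappa: observed agreement between two raters, corrected for the
# agreement expected by chance from each rater's marginal frequencies.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Kappa for two equal-length lists of categorical codes."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two abstractors coding the same 10 charts for a comorbidity (1 = present)
a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
b = [1, 1, 0, 0, 1, 0, 0, 1, 0, 1]
print(round(cohens_kappa(a, b), 2))  # 0.6
```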
Jeannerat, Damien
2017-01-01
The introduction of a universal data format for reporting the correlation data of 2D NMR spectra such as COSY, HSQC, and HMBC would have a large impact on the reliability of structure determination of small organic molecules. These lists of assigned cross peaks would bridge the signals found in 1D and 2D NMR spectra and the assigned chemical structure. The record could be compact and both human- and computer-readable, so that it can be included in the supplementary material of publications and easily transferred into databases of scientific literature and chemical compounds. Such records would allow authors, reviewers, and future users to test the consistency and, in favorable situations, the uniqueness of the assignment of the correlation data to the associated chemical structures. Ideally, the data format should include direct links to the NMR spectra, making it possible to validate their reliability and to compare spectra directly. To realize its full potential, the correlation data and the NMR spectra should therefore accompany any manuscript through the review process and be stored in open-access databases after publication. Keeping all NMR spectra, correlation data, and assigned structures together at all times will allow the future development of validation tools that increase the reliability of past and future NMR data. This will also facilitate artificial-intelligence analysis of NMR spectra by providing a source of data that can be used efficiently, because they have been validated or can be validated by future users. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Bashev, A.
2012-04-01
Currently there is an enormous number of geoscience databases. Unfortunately, the only users of most of them are their own developers. There are several reasons for this: incompatibility, specificity of tasks and objects, and so on. The main obstacles to wide usage of geoscience databases, however, are complexity for developers and complication for users. Complex architecture leads to high costs that block public access, while complication prevents users from understanding when and how to use a database. Only databases associated with GoogleMaps avoid these drawbacks, but they can hardly be called "geoscience" databases. Nevertheless, an open and simple geoscience database is necessary, at least for educational purposes (see our abstract for ESSI20/EOS12). We developed such a database and a web interface to work with it, now accessible at maps.sch192.ru. In this database, a result is the value of a parameter (of any kind) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of station (lake, soil, etc.); and the contributor who submitted the result. Each contributor has a profile, which allows users to estimate the reliability of the data. Results can be displayed on a GoogleMaps satellite image as points at the stations' positions, coloured according to the parameter value. Default colour scales are provided, and each registered user can create their own. Results can also be extracted to a *.csv file. For both types of representation, the data can be filtered by date, object type, parameter type, area, and contributor. Data are uploaded in *.csv format: Name of the station; Latitude (dd.dddddd); Longitude (ddd.dddddd); Station type; Parameter type; Parameter value; Date (yyyy-mm-dd). The contributor is recognized at login. This is the minimal set of features required to connect a parameter value with a position and view the results.
All complicated data treatment can be conducted in other programs after extracting the filtered data to a *.csv file, which makes the database understandable for non-experts. The database employs an open data format (*.csv) and widespread tools: PHP as the programming language, MySQL as the database management system, JavaScript for interaction with GoogleMaps, and jQuery UI for the user interface. The database is multilingual: association tables link translations to elements of the database. In total, development required about 150 hours. The database still has several problems. The main one is the reliability of the data; properly addressing it would require an expert system for reliability estimation, but elaborating such a system would take more resources than the database itself. The second is the problem of stream selection: how to select stations that are connected with each other (for example, belonging to one water stream) and indicate their sequence. Currently the interface is in English and Russian, but it can easily be translated into other languages. Some problems we have already solved. For example, the "same station" problem (sometimes the distance between stations is smaller than the positional error): when a new station is added, the application automatically finds existing stations near that place. We have also solved the problem of object and parameter types (how to regard "EC" and "electrical conductivity" as the same parameter) using associative tables. If you would like to see the interface in your language, just contact us and we will send you the list of terms and phrases to translate. The main advantage of the database is that it is totally open: everybody can view and extract the data and use them for non-commercial purposes free of charge, and registered users can contribute to the database on a volunteer basis.
We hope that it will be widely used, first of all for educational purposes, but professional scientists could use it as well.
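The semicolon-separated upload format described above can be parsed with standard tools; the field names in this sketch are our own labels for the columns the abstract lists, and the sample row is invented.

```python
# Parse the database's *.csv upload format (semicolon-delimited):
# Name; Latitude; Longitude; Station type; Parameter type; Value; Date.

import csv
import io
from datetime import date

FIELDS = ["name", "latitude", "longitude",
          "station_type", "parameter_type", "value", "date"]

def parse_upload(text):
    """Parse upload lines into dicts, converting coordinates, value, date."""
    rows = []
    for raw in csv.reader(io.StringIO(text), delimiter=";"):
        rec = dict(zip(FIELDS, (f.strip() for f in raw)))
        rec["latitude"] = float(rec["latitude"])
        rec["longitude"] = float(rec["longitude"])
        rec["value"] = float(rec["value"])
        rec["date"] = date.fromisoformat(rec["date"])
        rows.append(rec)
    return rows

sample = "Lake Seliger N1;57.241302;33.093871;lake;EC;112.5;2011-08-14"
rec = parse_upload(sample)[0]
print(rec["parameter_type"], rec["value"])  # EC 112.5
```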
Similarity Search in Large Collections of Biometric Data
2009-10-01
instantaneous identification of a person by converting the biometric into a digital form and then comparing it against a computerized database. They can...combined to get reliable results. Exact matches in biometric collections have very little meaning and only a relative ordering of database objects with...running several indices for different aspects of the data, e.g. facial features, fingerprints and palmprints of a person, together. The system then
1994-06-30
Crack Tip Opening Displacement (CTOD) Fracture Toughness Measurement". The method has found application in elastic-plastic fracture mechanics (EPFM)...6.1 Proposed Material Property Database Format and Hierarchy...6.2 Sample Application of the Material Property Database...the E 49.05 sub-committee. The relevant quality indicators applicable to the present program are: source of data, statistical basis of data
[Integrated DNA barcoding database for identifying Chinese animal medicine].
Shi, Lin-Chun; Yao, Hui; Xie, Li-Fang; Zhu, Ying-Jie; Song, Jing-Yuan; Zhang, Hui; Chen, Shi-Lin
2014-06-01
In order to construct an integrated DNA barcoding database for identifying Chinese animal medicines, the authors and their collaborators carried out extensive research on identifying Chinese animal medicines using DNA barcoding technology, and analysed sequences from GenBank in parallel. Three methods (BLAST, barcoding gap analysis, and tree building) were used to confirm the reliability of the barcode records in the database. The integrated database was constructed from three parts: specimen, sequence, and literature information. It covers about 800 animal medicines together with their adulterants and closely related species. Unknown specimens can be identified by pasting a sequence record into the window on the identification page of the species identification system for traditional Chinese medicine (www.tcmbarcode.cn). The integrated DNA barcoding database for identifying Chinese animal medicine is of significant importance for animal species identification, conservation of rare and endangered species, and sustainable utilization of animal resources.
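The barcoding-gap check mentioned above compares genetic distances within a species against distances to the nearest other species; a positive gap supports reliable identification. The distance values below are invented for illustration.

```python
# Barcoding gap: the difference between the smallest interspecific distance
# and the largest intraspecific distance for a candidate barcode marker.
# A clearly positive gap means the marker separates the species cleanly.

def barcoding_gap(intraspecific, interspecific):
    """Gap between the closest other-species distance and the farthest
    same-species distance; positive means clean separation."""
    return min(interspecific) - max(intraspecific)

intra = [0.002, 0.004, 0.007]   # pairwise distances within a species
inter = [0.058, 0.071, 0.090]   # distances to the nearest other species
gap = barcoding_gap(intra, inter)
print(round(gap, 3))  # 0.051
```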
Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven
2015-01-01
Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
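One common way to encode such applicability uncertainty (a sketch, not the presentation's own method) is to treat the generic point estimate as the median of a lognormal prior and widen it with an assumed error factor EF, defined as the ratio of the 95th percentile to the median; the EF of 10 below is an illustrative assumption, not a recommended value.

```python
# Build a lognormal prior for a failure rate from a generic point estimate.
# A larger error factor spreads the distribution, reflecting weaker
# confidence that the generic data source applies to the new component.

import math
from statistics import NormalDist

def lognormal_prior(median, error_factor):
    """Return (mu, sigma) of the underlying normal for a lognormal prior."""
    z95 = NormalDist().inv_cdf(0.95)          # ~1.645
    sigma = math.log(error_factor) / z95
    return math.log(median), sigma

def percentile(mu, sigma, p):
    """p-th percentile of the lognormal(mu, sigma) distribution."""
    return math.exp(mu + sigma * NormalDist().inv_cdf(p))

mu, sigma = lognormal_prior(median=1e-5, error_factor=10.0)  # failures/hr
print(percentile(mu, sigma, 0.05), percentile(mu, sigma, 0.95))
```

With this parameterization the 5th and 95th percentiles land at median/EF and median*EF, which makes the effect of the applicability judgment easy to communicate.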
Questioning reliability assessments of health information on social media.
Dalmer, Nicole K
2017-01-01
This narrative review examines assessments of the reliability of online health information retrieved through social media to ascertain whether health information accessed or disseminated through social media should be evaluated differently than other online health information. Several medical, library and information science, and interdisciplinary databases were searched using terms relating to social media, reliability, and health information. While social media's increasing role in health information consumption is recognized, studies are dominated by investigations of traditional (i.e., non-social media) sites. To more richly assess constructions of reliability when using social media for health information, future research must focus on health consumers' unique contexts, virtual relationships, and degrees of trust within their social networks.
Yang, Qi; Franco, Christopher M M; Sorokin, Shirley J; Zhang, Wei
2017-02-02
For sponges (phylum Porifera), no reliable molecular protocol is available for species identification. To address this gap, we developed a multilocus-based Sponge Identification Protocol (SIP), validated on a sample of 37 sponge species belonging to 10 orders from South Australia. The universal barcode COI mtDNA, the 28S rRNA gene (D3-D5), and the nuclear ITS1-5.8S-ITS2 region were evaluated for their suitability and capacity for sponge identification. The hit with the highest bit score was used to infer identity, and the reliability of SIP was validated by phylogenetic analysis. The 28S rRNA gene and COI mtDNA performed better than the ITS region in classifying sponges at various taxonomic levels. A major limitation is that the reference databases are sparsely populated and of low diversity, which makes molecular identification difficult. Identification also depends on the accuracy of the morphological classification of the sponges whose sequences have been submitted to the databases. Re-examination of the morphological identifications further demonstrated and improved the reliability of sponge identification by SIP. Integrated with morphological identification, the multilocus-based SIP offers an improved protocol for more reliable and effective sponge identification by coupling the accuracy of different DNA markers.
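The highest-bit-score rule reduces to a simple selection across the three markers; the hit tuples below are invented for illustration, not taken from the study.

```python
# Infer an identity from BLAST-style hits across several markers by taking
# the single hit with the highest bit score, as the protocol describes.

def best_identity(hits):
    """hits: (marker, species, bit_score) tuples.

    Return the species with the top bit score and its supporting marker."""
    marker, species, score = max(hits, key=lambda h: h[2])
    return species, marker

hits = [
    ("COI", "Aplysina sp. A", 912.0),
    ("28S", "Aplysina sp. A", 1105.0),
    ("ITS", "Aplysina sp. B", 640.0),
]
print(best_identity(hits))  # ('Aplysina sp. A', '28S')
```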
Soetens, Oriane; De Bel, Annelies; Echahidi, Fedoua; Vancutsem, Ellen; Vandoorslaer, Kristof; Piérard, Denis
2012-01-01
The performance of matrix-assisted laser desorption–ionization time of flight mass spectrometry (MALDI-TOF MS) for species identification of Prevotella was evaluated and compared with 16S rRNA gene sequencing. Using a Bruker database, 62.7% of the 102 clinical isolates were identified to the species level and 73.5% to the genus level. Extension of the commercial database improved these figures to, respectively, 83.3% and 89.2%. MALDI-TOF MS identification of Prevotella is reliable but needs a more extensive database. PMID:22301022
Iris indexing based on local intensity order pattern
NASA Astrophysics Data System (ADS)
Emerich, Simina; Malutan, Raul; Crisan, Septimiu; Lefkovits, Laszlo
2017-03-01
In recent years, iris biometric systems have increased in popularity and have proven capable of handling large-scale databases. The main advantages of these systems are accuracy and reliability. Proper classification of iris patterns is expected to reduce matching time in huge databases. This paper presents an iris indexing technique based on the Local Intensity Order Pattern. The performance of the present approach is evaluated on the UPOL database and compared with other recent systems designed for iris indexing. The results illustrate the potential of the proposed method for large-scale iris identification.
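A toy sketch of the Local Intensity Order Pattern idea: for each pixel, the rank order of its N sampled neighbours is mapped to one of N! pattern bins, and a region is summarized as a histogram over those bins. Real LIOP samples neighbours on a circle with interpolation and weighting; this sketch uses only the four axis-aligned neighbours of a small integer image.

```python
# Map each interior pixel's neighbour intensity ordering to a pattern index
# (one of N! = 24 permutations) and histogram the patterns over the region.

from itertools import permutations
from math import factorial

N = 4
PATTERN_INDEX = {p: i for i, p in enumerate(permutations(range(N)))}

def liop_histogram(img):
    """Histogram of intensity-order patterns over all interior pixels."""
    h = [0] * factorial(N)
    rows, cols = len(img), len(img[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # neighbours: up, right, down, left
            nb = (img[r-1][c], img[r][c+1], img[r+1][c], img[r][c-1])
            order = tuple(sorted(range(N), key=lambda i: nb[i]))
            h[PATTERN_INDEX[order]] += 1
    return h

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
hist = liop_histogram(img)
print(sum(hist))  # a 3x3 image has one interior pixel -> 1
```

Indexing then amounts to comparing such histograms, which is far cheaper than full template matching across a large database.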
Overview of Historical Earthquake Document Database in Japan and Future Development
NASA Astrophysics Data System (ADS)
Nishiyama, A.; Satake, K.
2014-12-01
In Japan, damage and disasters from large historical earthquakes have been documented and preserved. Compilation of historical earthquake documents started in the early 20th century, and 33 volumes of historical document source books (about 27,000 pages) have been published. However, these source books are not used effectively by researchers, because low-reliability records are mixed in and keyword searches by characters and dates are difficult. To overcome these problems and to promote historical earthquake studies in Japan, construction of a text database started in the 21st century. For historical earthquakes from the beginning of the 7th century to the early 17th century, the "Online Database of Historical Documents in Japanese Earthquakes and Eruptions in the Ancient and Medieval Ages" (Ishibashi, 2009) has already been constructed; its compilers examined the source books or original texts of the historical literature, emended the descriptions, and assigned a reliability to each historical document on the basis of its written age. Another project compiled the historical documents for seven damaging earthquakes that occurred along the Sea of Japan coast of Honshu, central Japan, in the Edo period (from the beginning of the 17th century to the middle of the 19th century) and constructed text and seismic-intensity databases, which are now public on the web (in Japanese only). However, only about 9% of the earthquake source books have been digitized so far. We therefore plan to digitize all of the remaining historical documents under a research programme that started in 2014, with a specification similar to that of the previous databases. We also plan to link this database with a liquefaction-trace database, to be constructed by another research programme, by adding the location information described in the historical documents.
The constructed database will be used to estimate the distributions of seismic intensities and tsunami heights.
The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud
Karimi, Kamran; Vize, Peter D.
2014-01-01
As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org PMID:25380782
Developing a Nursing Database System in Kenya
Riley, Patricia L; Vindigni, Stephen M; Arudo, John; Waudo, Agnes N; Kamenju, Andrew; Ngoya, Japheth; Oywer, Elizabeth O; Rakuom, Chris P; Salmon, Marla E; Kelley, Maureen; Rogers, Martha; St Louis, Michael E; Marum, Lawrence H
2007-01-01
Objective To describe the development, initial findings, and implications of a national nursing workforce database system in Kenya. Principal Findings Creating a national electronic nursing workforce database provides more reliable information on nurse demographics, migration patterns, and workforce capacity. Data analyses are most useful for human resources for health (HRH) planning when workforce capacity data can be linked to worksite staffing requirements. As a result of establishing this database, the Kenya Ministry of Health has improved capability to assess its nursing workforce and document important workforce trends, such as out-migration. Current data identify the United States as the leading recipient country of Kenyan nurses. The overwhelming majority of Kenyan nurses who elect to out-migrate are among Kenya's most qualified. Conclusions The Kenya nursing database is a first step toward facilitating evidence-based decision making in HRH. This database is unique to developing countries in sub-Saharan Africa. Establishing an electronic workforce database requires long-term investment and sustained support by national and global stakeholders. PMID:17489921
Ziatabar Ahmadi, Seyyede Zohreh; Jalaie, Shohreh; Ashayeri, Hassan
2015-09-01
Theory of mind (ToM), or mindreading, is an aspect of social cognition that involves evaluating the mental states and beliefs of oneself and others. Validity and reliability are very important criteria when evaluating standard tests; without them, these tests are not usable. The aim of this study was to systematically review the validity and reliability of published English comprehensive ToM tests developed for normal preschool children. We searched the MEDLINE (PubMed interface), Web of Science, ScienceDirect, PsycINFO and evidence-based medicine (The Cochrane Library) databases from 1990 to June 2015. The search strategy was the Latin transcription of 'Theory of Mind' AND test AND children. We also manually studied the reference lists of all retrieved articles and searched their references. Inclusion criteria were: valid and reliable diagnostic ToM tests published from 1990 to June 2015 for normal preschool children. Exclusion criteria were: studies that only used ToM tests and single tasks (false-belief tasks) for ToM assessment and/or gave no description of the structure, validity or reliability of their tests. The methodological quality of the selected articles was assessed using the Critical Appraisal Skills Programme (CASP). In the primary search, we found 1237 articles across all databases. After removing duplicates and applying all inclusion and exclusion criteria, we selected 11 tests for this systematic review. Only a few valid, reliable and comprehensive ToM tests exist for normal preschool children, and the included articles had limitations. The identified ToM tests differed in populations, tasks, modes of presentation, scoring, modes of response, timing and other variables, and they had varying validity and reliability. It is therefore recommended that researchers and clinicians select ToM tests according to their psychometric characteristics, validity and reliability.
Bridge element deterioration rates.
DOT National Transportation Integrated Search
2008-10-01
This report describes the development of bridge element deterioration rates from the NYSDOT bridge inspection database using Markov-chain and Weibull-based approaches. It is observed that the Weibull-based approach is more reliable for developing b...
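A Markov-chain deterioration model of the kind described can be sketched in a few lines of Python (the transition probabilities below are illustrative, not NYSDOT-calibrated):

```python
import numpy as np

# Hypothetical 4-state bridge element condition model (state 1 = best).
# Each row gives the probability of staying in a state or dropping one
# state per inspection cycle; values are invented for illustration.
P = np.array([
    [0.90, 0.10, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],  # worst state is absorbing
])

def condition_after(years, start=np.array([1.0, 0.0, 0.0, 0.0])):
    """Distribution over condition states after `years` inspection cycles."""
    return start @ np.linalg.matrix_power(P, years)

dist = condition_after(10)
print(dist.round(3))  # expected fraction of elements in each state
```

A Weibull-based alternative would instead fit a hazard function to the observed sojourn times in each state, relaxing the Markov assumption of state-independent, memoryless transitions.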
[Relational database for urinary stone ambulatory consultation. Assessment of initial outcomes].
Sáenz Medina, J; Páez Borda, A; Crespo Martinez, L; Gómez Dos Santos, V; Barrado, C; Durán Poveda, M
2010-05-01
To create a relational database for monitoring lithiasis patients. We describe the architectural details and the initial results of the statistical analysis. Microsoft Access 2002 was used as the platform. Four tables were constructed to gather demographic data (table 1), clinical and laboratory findings (table 2), stone features (table 3) and therapeutic approach (table 4). For a reliability analysis of the database, the number of correctly stored data items was counted. To evaluate the performance of the database, a prospective analysis was conducted, from May 2004 to August 2009, on 171 patients who were stone-free after treatment (ESWL, surgery or medical) out of a total of 511 patients stored in the database. Lithiasic status (stone-free or stone relapse) was used as the primary end point, while demographic factors (age, gender), lithiasic history, upper urinary tract alterations and stone characteristics (side, location, composition and size) were considered predictive factors. A univariate analysis was conducted initially by chi-square test and supplemented by Kaplan-Meier estimates of time to stone recurrence. A multiple Cox proportional hazards regression model was generated to jointly assess the prognostic value of the demographic factors and the predictive value of stone characteristics. For the reliability analysis, 22,084 data items were available, corresponding to 702 consultations on 511 patients. Analysis of the data showed a recurrence rate of 85.4% (146/171; median time to recurrence 608 days, range 70-1758). In the univariate and multivariate analyses, none of the factors under consideration had a significant effect on the recurrence rate (p = ns). The relational database is useful for monitoring patients with urolithiasis. It allows easy control and updating, as well as data storage for later use. The analysis conducted for its evaluation showed no influence of demographic factors or stone features on stone recurrence.
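The Kaplan-Meier estimate used for time to stone recurrence can be sketched in plain Python (the patient tuples below are invented for illustration, not the study's data):

```python
# Minimal Kaplan-Meier estimator. Each patient is a tuple
# (days_to_event, recurred); recurred = 0 means censored (still
# stone-free at last follow-up). All numbers are invented.
def kaplan_meier(observations):
    """Return [(time, survival_probability)] at each recurrence time."""
    observations = sorted(observations)
    at_risk = len(observations)
    surv, curve = 1.0, []
    for time, event in observations:
        if event:                        # stone recurrence observed
            surv *= (at_risk - 1) / at_risk
            curve.append((time, surv))
        at_risk -= 1                     # event or censoring leaves risk set
    return curve

patients = [(70, 1), (200, 1), (350, 0), (608, 1), (900, 0), (1758, 1)]
curve = kaplan_meier(patients)
for t, s in curve:
    print(t, round(s, 3))
```

The Cox model mentioned in the abstract generalizes this by regressing the hazard on covariates (age, gender, stone features), which requires an iterative partial-likelihood fit rather than the closed-form product above.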
NASA Astrophysics Data System (ADS)
Garcia-Mayordomo, Julian; Martin-Banda, Raquel; Insua-Arevalo, Juan Miguel; Alvarez-Gomez, Jose Antonio; Martinez-Diaz, Jose Jesus
2017-04-01
Since the Quaternary Active Faults Database of Iberia (QAFI) was released in February 2012, a number of seismic hazard assessment studies have made use of it. We will present a summary of the shortcomings and advantages encountered when QAFI was used in different seismic hazard studies. These include the production of the new official seismic hazard map of Spain, performed in view of the foreseen adoption of Eurocode-8 during 2017. The QAFI database was considered a complementary source of information for designing the seismogenic source-zone models used in the calculations, particularly for estimating the maximum magnitude distribution in each zone and for assigning the predominant rupture mechanism based on style of faulting. We will also review the different results obtained by other studies that considered QAFI faults as independent seismogenic sources rather than source zones, revealing, on the one hand, the crucial importance of data reliability and, on the other, the strong influence that ground-motion attenuation models have on the actual impact of fault sources on hazard results. Finally, we will briefly present the updated version of the database (QAFI v.3, 2015), which includes an original scheme for evaluating the reliability of fault seismic parameters, specifically devised to facilitate decision-making for seismic hazard practitioners.
Paksi, Borbala; Demetrovics, Zsolt; Magi, Anna; Felvinczi, Katalin
2017-06-01
This paper introduces the methods and methodological findings of the National Survey on Addiction Problems in Hungary (NSAPH 2015). Use patterns of smoking, alcohol and other psychoactive substances were measured, as well as certain behavioural addictions (problematic gambling - PGSI, DSM-V; eating disorders - SCOFF; problematic internet use - PIUQ; problematic on-line gaming - POGO; problematic social media use - FAS; exercise addiction - EAI-HU; work addiction - BWAS; compulsive buying - CBS). The paper describes the measurement techniques applied, sample selection, recruitment of respondents and the data collection strategy. Methodological results of the survey, including the reliability and validity of the measures, are reported. The NSAPH 2015 research was carried out on a nationally representative sample of the Hungarian adult population aged 16-64 years (gross sample 2477, net sample 2274 persons), with the age group 18-34 being overrepresented. Statistical analysis of the weight distribution suggests that weighting did not create any artificial distortion in the database, leaving the representativeness of the sample unaffected. The size of the weighted sample of the 18-64-year-old adult population is 1490 persons. The theoretical margin of error in the weighted sample is ±2.5% at a 95% confidence level, which is in line with the original data collection plans. Based on the analysis of reliability and of errors beyond sampling within the context of the database, we conclude that inconsistencies create relatively minor distortions in cumulative prevalence rates; consequently, the database makes possible the reliable estimation of risk factors related to different substance use behaviours. The reliability indexes of the measurements used for prevalence estimates of behavioural addictions proved to be appropriate, though the psychometric features in some cases suggest the presence of redundant items. The comparison of errors beyond sample selection in the current and previous data collections indicates that trend estimates and their interpretation require particular attention, and in some cases correction procedures may be necessary.
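The reported ±2.5% theoretical margin of error is consistent with the standard worst-case formula for a simple random sample; a quick check in Python (this sketch ignores any design effect from the weighting):

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case (p = 0.5) margin of error at the z-score's confidence level."""
    return z * math.sqrt(p * (1 - p) / n)

# Weighted sample of the 18-64-year-old population: n = 1490
moe = margin_of_error(1490)
print(f"±{moe:.1%}")  # ±2.5%, matching the reported figure at 95% confidence
```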
Bodner, Martin; Bastisch, Ingo; Butler, John M; Fimmers, Rolf; Gill, Peter; Gusmão, Leonor; Morling, Niels; Phillips, Christopher; Prinz, Mechthild; Schneider, Peter M; Parson, Walther
2016-09-01
The statistical evaluation of autosomal Short Tandem Repeat (STR) genotypes is based on allele frequencies. These are empirically determined from sets of randomly selected human samples, compiled into STR databases that have been established in the course of population genetic studies. There is currently no agreed procedure of performing quality control of STR allele frequency databases, and the reliability and accuracy of the data are largely based on the responsibility of the individual contributing research groups. It has been demonstrated with databases of haploid markers (EMPOP for mitochondrial mtDNA, and YHRD for Y-chromosomal loci) that centralized quality control and data curation is essential to minimize error. The concepts employed for quality control involve software-aided likelihood-of-genotype, phylogenetic, and population genetic checks that allow the researchers to compare novel data to established datasets and, thus, maintain the high quality required in forensic genetics. Here, we present STRidER (http://strider.online), a publicly available, centrally curated online allele frequency database and quality control platform for autosomal STRs. STRidER expands on the previously established ENFSI DNA WG STRbASE and applies standard concepts established for haploid and autosomal markers as well as novel tools to reduce error and increase the quality of autosomal STR data. The platform constitutes a significant improvement and innovation for the scientific community, offering autosomal STR data quality control and reliable STR genotype estimates. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
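STRidER's software-aided checks are more elaborate than can be shown here, but one classic quality-control step on submitted genotype counts can be sketched: a chi-square test of Hardy-Weinberg equilibrium. The biallelic simplification and the counts below are invented for illustration; real STR loci are multi-allelic.

```python
# Chi-square test of Hardy-Weinberg equilibrium for a two-allele locus.
# A strong departure from HWE in a frequency database often signals
# typing or data-entry error rather than real population structure.
def hwe_chi_square(n_aa, n_ab, n_bb):
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)          # frequency of allele A
    q = 1 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

stat = hwe_chi_square(298, 489, 213)  # invented genotype counts
print(round(stat, 2))  # compare against the chi-square critical value, df = 1
```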
Time-Critical Database Conditions Data-Handling for the CMS Experiment
NASA Astrophysics Data System (ADS)
De Gruttola, Michele; Di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio
2011-08-01
Automatic, synchronous and reliable population of the condition database is critical for the correct operation of the online selection as well as of the offline reconstruction and data analysis. We describe here the system put in place in the CMS experiment to automate the processes that populate the database centrally and make condition data promptly available, both online for the high-level trigger and offline for reconstruction. The data are “dropped” by the users into a dedicated service, which synchronizes them and takes care of writing them into the online database. They are then automatically streamed to the offline database, and hence are immediately accessible offline worldwide. This mechanism was used intensively during the 2008 and 2009 operations with cosmic-ray challenges and the first LHC collision data, and many improvements have been made since. The experience of these first years of operation will be discussed in detail.
Developing a database for pedestrians' earthquake emergency evacuation in indoor scenarios.
Zhou, Junxue; Li, Sha; Nie, Gaozhong; Fan, Xiwei; Tan, Jinxian; Li, Huayue; Pang, Xiaoke
2018-01-01
With the booming development of evacuation simulation software, developing an extensive database in indoor scenarios for evacuation models is imperative. In this paper, we conduct a qualitative and quantitative analysis of the collected videotapes and aim to provide a complete and unitary database of pedestrians' earthquake emergency response behaviors in indoor scenarios, including human-environment interactions. Using the qualitative analysis method, we extract keyword groups and keywords that code the response modes of pedestrians and construct a general decision flowchart using chronological organization. Using the quantitative analysis method, we analyze data on the delay time, evacuation speed, evacuation route and emergency exit choices. Furthermore, we study the effect of classroom layout on emergency evacuation. The database for indoor scenarios provides reliable input parameters and allows the construction of real and effective constraints for use in software and mathematical models. The database can also be used to validate the accuracy of evacuation models.
A new feature constituting approach to detection of vocal fold pathology
NASA Astrophysics Data System (ADS)
Hariharan, M.; Polat, Kemal; Yaacob, Sazali
2014-08-01
In the last two decades, non-invasive methods based on acoustic analysis of the voice signal have proved to be excellent and reliable tools for diagnosing vocal fold pathologies. This paper proposes a new feature vector based on the wavelet packet transform and singular value decomposition for the detection of vocal fold pathology. k-means clustering based feature weighting is proposed to increase the discriminative power of the proposed features. In this work, two databases are used: the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database and the MAPACI speech pathology database. Four different supervised classifiers, k-nearest neighbour (k-NN), least-squares support vector machine, probabilistic neural network and general regression neural network, are employed for testing the proposed features. The experimental results show that the proposed features give a very promising classification accuracy of 100% for both the MEEI database and the MAPACI speech pathology database.
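The SVD step of such a feature vector can be sketched with NumPy. Here a random matrix stands in for the actual wavelet packet sub-band coefficients (which would normally come from a package such as PyWavelets); the sub-band and coefficient counts are invented.

```python
import numpy as np

# Placeholder for wavelet packet coefficients: 8 sub-bands x 64 coefficients.
rng = np.random.default_rng(0)
coeffs = rng.normal(size=(8, 64))

def svd_features(coeff_matrix, k=4):
    """Top-k singular values of the coefficient matrix, normalised to unit sum.

    Singular values summarise the energy distribution of the decomposition
    compactly, which is why they work as low-dimensional classifier inputs.
    """
    s = np.linalg.svd(coeff_matrix, compute_uv=False)[:k]
    return s / s.sum()

features = svd_features(coeffs)
print(features.shape)  # one k-dimensional feature vector per voice sample
```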
CORE-Hom: a powerful and exhaustive database of clinical trials in homeopathy.
Clausen, Jürgen; Moss, Sian; Tournier, Alexander; Lüdtke, Rainer; Albrecht, Henning
2014-10-01
The CORE-Hom database was created to answer the need for a reliable and publicly available source of information in the field of clinical research in homeopathy. As of May 2014 it held 1048 entries of clinical trials, observational studies and surveys in the field of homeopathy, including second publications and re-analyses. 352 of the trials referenced in the database were published in peer reviewed journals, 198 of which were randomised controlled trials. The most often used remedies were Arnica montana (n = 103) and Traumeel(®) (n = 40). The most studied medical conditions were respiratory tract infections (n = 126) and traumatic injuries (n = 110). The aim of this article is to introduce the database to the public, describing and explaining the interface, features and content of the CORE-Hom database. Copyright © 2014 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.
Fear of cancer recurrence: a systematic literature review of self-report measures.
Thewes, Belinda; Butow, Phillis; Zachariae, Robert; Christensen, Soren; Simard, Sébastien; Gotay, Carolyn
2012-06-01
Prior research has shown that many cancer survivors experience ongoing fears of cancer recurrence (FCR) and that this chronic uncertainty of health status during and after cancer treatment can be a significant psychological burden. The field of research on FCR is an emerging area of investigation in the cancer survivorship literature, and several standardised instruments for its assessment have been developed. This review aims to identify all available FCR-specific questionnaires and subscales and critically appraise their properties. A systematic review was undertaken to identify instruments measuring FCR. Relevant studies were identified via Medline (1950-2010), CINAHL (1982-2010), PsycINFO (1967-2010) and AMED (1985-2010) databases, reference lists of articles and reviews, grey literature databases and consultation with experts in the field. The Medical Outcomes Trust criteria were used to examine the psychometric properties of the questionnaires. A total of 20 relevant multi-item measures were identified. The majority of instruments have demonstrated reliability and preliminary evidence of validity. Relatively few brief measures (2-10 items) were found to have comprehensive validation and reliability data available. Several valid and reliable longer measures (>10 items) are available. Three have developed short forms that may prove useful as screening tools. This analysis indicated that further refinement and validation of existing instruments is required. Valid and reliable instruments are needed for both research and clinical care. Copyright © 2011 John Wiley & Sons, Ltd.
Human Variome Project Quality Assessment Criteria for Variation Databases.
Vihinen, Mauno; Hancock, John M; Maglott, Donna R; Landrum, Melissa J; Schaafsma, Gerard C P; Taschner, Peter
2016-06-01
Numerous databases containing information about DNA, RNA, and protein variations are available. Gene-specific variant databases (locus-specific variation databases, LSDBs) are typically curated and maintained for single genes or groups of genes associated with a certain disease or diseases. These databases are widely considered the most reliable information source for a particular gene, protein, or disease, but it should also be made clear that they may vary widely in content, infrastructure, and quality. Quality is very important to evaluate because these databases may affect health decision-making, research, and clinical practice. The Human Variome Project (HVP) established a Working Group for Variant Database Quality Assessment. The basic principle was to develop a simple system that nevertheless provides a good overview of the quality of a database. The resulting HVP quality evaluation criteria are divided into four main components: data quality, technical quality, accessibility, and timeliness. This report elaborates on the developed quality criteria and how implementation of the quality scheme can be achieved. Examples are provided for the current status of the quality items in two different databases: BTKbase, an LSDB, and ClinVar, a central archive of submissions about variants and their clinical significance. © 2016 WILEY PERIODICALS, INC.
NASA Technical Reports Server (NTRS)
Bertelrud, Arild; Johnson, Sherylene; Anders, J. B. (Technical Monitor)
2002-01-01
A 2-D (two-dimensional) high-lift system experiment was conducted in August 1996 in the Low Turbulence Pressure Tunnel at NASA Langley Research Center, Hampton, VA. The purpose of the experiment was to obtain transition measurements on a three-element high-lift system for CFD (computational fluid dynamics) code validation studies. A transition database has been created using the data from this experiment. The present report details how the hot-film data and the related pressure data are organized in the database. Data processing codes to access the data in an efficient and reliable manner are described, and limited examples are given of how to access the database and store acquired information.
NASA Technical Reports Server (NTRS)
Kelley, Steve; Roussopoulos, Nick; Sellis, Timos
1992-01-01
The goal of the Universal Index System (UIS) is to provide an easy-to-use and reliable interface to many different kinds of database systems. The impetus for this system was to simplify database index management for users, thus encouraging the use of indexes. As the idea grew into an actual system design, increasing database performance by facilitating time-saving techniques at the user level became a theme of the project. This final report describes the design and implementation of UIS and its language interfaces. It also includes the User's Guide and the Reference Manual.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buche, D. L.; Perry, S.
This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects.
Gajewski, Byron J.; Lee, Robert; Dunton, Nancy
2012-01-01
Data Envelopment Analysis (DEA) is the most commonly used approach for evaluating healthcare efficiency (Hollingsworth, 2008), but a long-standing concern is that DEA assumes the data are measured without error. This is quite unlikely, and DEA and other efficiency analysis techniques may yield biased efficiency estimates if measurement error is ignored (Gajewski, Lee, Bott, Piamjariyakul and Taunton, 2009; Ruggiero, 2004). We propose to address measurement error systematically using a Bayesian method (Bayesian DEA). We will apply Bayesian DEA to data from the National Database of Nursing Quality Indicators® (NDNQI®) to estimate nursing units' efficiency. Several external reliability studies inform the posterior distribution of the measurement error on the DEA variables. We will discuss the case of generalizing the approach to situations where an external reliability study is not feasible. PMID:23328796
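The deterministic core of DEA can be sketched as a small linear program (input-oriented CCR model, solved here with SciPy; the unit data are invented). The Bayesian extension described in the abstract would re-solve this LP over posterior draws of the error-corrected inputs and outputs rather than once over point estimates.

```python
import numpy as np
from scipy.optimize import linprog

# Invented data: two inputs and one output for three "nursing units".
x = np.array([[2.0, 3.0, 4.0],    # input 1 per unit
              [4.0, 5.0, 9.0]])   # input 2 per unit
y = np.array([[1.0, 2.0, 3.0]])   # output per unit

def dea_efficiency(o):
    """Input-oriented CCR efficiency of unit o: min theta s.t. a convex
    cone of peers uses at most theta * inputs of o and produces >= its output."""
    n = x.shape[1]
    c = np.r_[1.0, np.zeros(n)]                  # minimise theta
    A_in = np.c_[-x[:, o], x]                    # sum(l*x_j) <= theta * x_o
    A_out = np.c_[np.zeros(y.shape[0]), -y]      # sum(l*y_j) >= y_o
    b = np.r_[np.zeros(x.shape[0]), -y[:, o]]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                  bounds=[(0, None)] * (n + 1))
    return res.fun                               # efficiency score in (0, 1]

scores = [round(dea_efficiency(o), 3) for o in range(3)]
print(scores)  # units on the efficient frontier score 1.0
```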
Real-time traffic sign detection and recognition
NASA Astrophysics Data System (ADS)
Herbschleb, Ernst; de With, Peter H. N.
2009-01-01
The continuous growth of imaging databases increasingly requires analysis tools for feature extraction. In this paper, a new architecture for the detection of traffic signs is proposed. The architecture is designed to process a large database with tens of millions of images at resolutions up to 4,800×2,400 pixels. Because of the size of the database, both high reliability and high throughput are required. The novel architecture consists of a three-stage algorithm with multiple steps per stage, combining colour and specific spatial information. The first stage contains an area-limitation step whose performance is critical to both the detection rate and the overall processing time. The second stage locates candidate traffic signs using recently published feature processing. The third stage contains a validation step to enhance the reliability of the algorithm; during this stage, the traffic signs are recognized. Experiments show a convincing detection rate of 99%. With respect to computational speed, the throughput is 35 Hz for line-of-sight images of 800×600 pixels and 4 Hz for panorama images. Our novel architecture outperforms existing algorithms with respect to both detection rate and throughput.
Jaïdi, Faouzi; Labbene-Ayachi, Faten; Bouhoula, Adel
2016-12-01
E-healthcare is a major advancement and an emerging technology in the healthcare industry that contributes to setting up automated and efficient healthcare infrastructures. Unfortunately, several security aspects remain major challenges on the way to secure and privacy-preserving e-healthcare systems. From the access control perspective, e-healthcare systems face several issues arising from the need to define access control solutions that are at once rigorous and flexible. This delicate balance between flexibility and robustness has an immediate impact on the compliance of the deployed access control policy. To address this issue, the paper defines a general framework for verifying, validating and monitoring the compliance of access control policies in the context of e-healthcare databases. We study the conformity of low-level policies within relational databases, focusing on a medical-records management database defined in the context of a Medical Information System. We propose an advanced solution for deploying reliable and efficient access control policies. Our solution extends the traditional lifecycle of an access control policy and, in particular, allows the compliance of the policy to be managed. We refer to an example to illustrate the relevance of our proposal.
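One compliance check such a framework might perform can be sketched as a set difference between the grants actually deployed in the database and the validated reference policy. The role, table and privilege names below are invented for illustration; they are not from the paper.

```python
# Reference policy: the grants that *should* exist, as (role, object, privilege).
reference_policy = {
    ("nurse", "medical_records", "SELECT"),
    ("physician", "medical_records", "SELECT"),
    ("physician", "medical_records", "UPDATE"),
}

# Grants actually found in the database catalog (e.g. harvested from
# information_schema in a real system).
deployed_grants = {
    ("nurse", "medical_records", "SELECT"),
    ("nurse", "medical_records", "UPDATE"),     # privilege creep
    ("physician", "medical_records", "SELECT"),
}

excess = deployed_grants - reference_policy      # grants to revoke
missing = reference_policy - deployed_grants     # grants to (re)apply
print("revoke:", sorted(excess))
print("grant:", sorted(missing))
```

Run periodically, this kind of diff turns compliance from a one-off validation into the ongoing monitoring the paper argues for.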
Analysis of human serum phosphopeptidome by a focused database searching strategy.
Zhu, Jun; Wang, Fangjun; Cheng, Kai; Song, Chunxia; Qin, Hongqiang; Hu, Lianghai; Figeys, Daniel; Ye, Mingliang; Zou, Hanfa
2013-01-14
As human serum is an important source for the early diagnosis of many serious diseases, the serum proteome and peptidome have been analysed extensively. However, the serum phosphopeptidome has been less explored, probably because an effective database searching method has been lacking. Conventional strategies search the whole proteome database, which is very time-consuming for phosphopeptidome searches because of the huge search space resulting from the high redundancy of the database and the dynamic modifications set during the search. In this work, a focused database searching strategy using an in-house collected human serum pro-peptidome target/decoy database (HuSPep) was established. The searching time was significantly decreased without compromising identification sensitivity. By combining size-selective Ti(IV)-MCM-41 enrichment, off-line RP-RP separation, and complementary CID and ETD fragmentation with the new searching strategy, 143 unique endogenous phosphopeptides and 133 phosphorylation sites (109 novel sites) were identified from human serum with high reliability. Copyright © 2012 Elsevier B.V. All rights reserved.
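A target/decoy database supports a simple false discovery rate estimate: each decoy hit above a score threshold estimates one false target hit at the same threshold. A minimal sketch with invented scores (this is the generic estimator, not the authors' exact scoring scheme):

```python
# FDR at a score threshold from separate target and decoy search results.
def fdr_at_threshold(target_scores, decoy_scores, threshold):
    """Estimated FDR = (# decoy hits passing) / (# target hits passing)."""
    targets = sum(s >= threshold for s in target_scores)
    decoys = sum(s >= threshold for s in decoy_scores)
    return decoys / targets if targets else 0.0

# Invented peptide-spectrum match scores.
targets = [52, 48, 45, 40, 33, 30, 28, 25, 22, 20]
decoys = [31, 24, 18, 15]

print(round(fdr_at_threshold(targets, decoys, 25), 3))
```

In practice the threshold is chosen as the lowest score at which the estimated FDR stays under a chosen bound (commonly 1%).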
Austvoll-Dahlgren, Astrid; Guttersrud, Øystein; Nsangi, Allen; Semakula, Daniel; Oxman, Andrew D
2017-01-01
Background: The Claim Evaluation Tools database contains multiple-choice items for measuring people's ability to apply the key concepts they need to know to be able to assess treatment claims. We assessed items from the database using Rasch analysis to develop an outcome measure to be used in two randomised trials in Uganda. Rasch analysis is a form of psychometric testing relying on Item Response Theory; it is a dynamic way of developing outcome measures that are valid and reliable. Objectives: To assess the validity, reliability and responsiveness of 88 items addressing 22 key concepts using Rasch analysis. Participants: We administered four sets of multiple-choice items in English to 1114 people in Uganda and Norway, of whom 685 were children and 429 were adults (including 171 health professionals). We scored all items dichotomously. We explored summary and individual fit statistics using the RUMM2030 analysis package, and used SPSS to perform distractor analysis. Results: Most items conformed well to the Rasch model, but some items needed revision. Overall, the four item sets had satisfactory reliability. We did not identify significant response dependence between any pairs of items and, overall, the magnitude of multidimensionality in the data was acceptable. The items had a high level of difficulty. Conclusion: Most of the items conformed well to the Rasch model's expectations. Following revision of some items, we concluded that most were suitable for use in an outcome measure for evaluating the ability of children or adults to assess treatment claims. PMID:28550019
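The dichotomous Rasch model underlying this analysis relates a person's ability (theta) and an item's difficulty (b) through a logistic function; fit statistics compare observed responses against these model probabilities. A minimal sketch with invented parameter values:

```python
import math

# Dichotomous Rasch model: probability that a person of ability theta
# answers an item of difficulty b correctly (both on the same logit scale).
def rasch_p(theta, b):
    return 1 / (1 + math.exp(-(theta - b)))

# Harder items (larger b) yield lower success probabilities for an
# average respondent (theta = 0), consistent with the reported high
# level of item difficulty. Parameter values below are invented.
for b in (0.0, 1.0, 2.0):
    print(b, round(rasch_p(0.0, b), 2))
```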
A Novel Method to Handle the Effect of Uneven Sampling Effort in Biodiversity Databases
Pardo, Iker; Pata, María P.; Gómez, Daniel; García, María B.
2013-01-01
How reliable are results on the spatial distribution of biodiversity based on databases? Many studies have demonstrated the uncertainty associated with this kind of analysis owing to sampling-effort bias, and the need to quantify that bias. Although a number of methods are available for doing so, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a proposed new one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent capability to discriminate between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in a simulated scenario, whereas sampling completeness estimated with the other methods showed no relationship, owing to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness, and it can therefore be applied efficiently to most databases to enhance the reliability of biodiversity analyses. PMID:23326357
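The species-accumulation-curve idea underlying these methods can be sketched minimally as follows. The toy records and the final-slope completeness criterion are illustrative assumptions, not the FIDEGAM algorithm itself.

```python
def accumulation_curve(samples):
    """Species accumulation curve: cumulative count of distinct
    species as sampling records are added one by one."""
    seen, curve = set(), []
    for sp in samples:
        seen.add(sp)
        curve.append(len(seen))
    return curve

def final_slope(curve):
    """Slope of the last segment of the curve; values near 0
    suggest a well-sampled (near-saturated) area."""
    if len(curve) < 2:
        return float("inf")
    return curve[-1] - curve[-2]

# Hypothetical occurrence records for one grid cell
records = ["A", "B", "A", "C", "C", "C"]
curve = accumulation_curve(records)
```

A flattening curve (slope near zero) is the usual signal that additional sampling yields few new species.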
Estimating liver cancer deaths in Thailand based on verbal autopsy study.
Waeto, Salwa; Pipatjaturon, Nattakit; Tongkumchum, Phattrawan; Choonpradub, Chamnein; Saelim, Rattikan; Makaje, Nifatamah
2014-01-01
Liver cancer mortality is high in Thailand, but the utility of related vital statistics is limited because national vital registration (VR) data under-report specific causes of death. Accurate methodologies and reliable supplementary data are needed to provide sound national vital statistics. This study aimed to model liver cancer deaths based on a verbal autopsy (VA) study conducted in 2005, to provide more accurate estimates of liver cancer deaths than those reported; the results were used to estimate the number of liver cancer deaths during 2000-2009. The VA was carried out in 2005 on a sample of 9,644 deaths from nine provinces and provided reliable information on causes of death by gender, age group, location of death in or outside hospital, and cause of death in the VR database. Logistic regression was used to model liver cancer deaths against these variables. The estimated probabilities from the model were applied to liver cancer deaths in the VR database for 2000-2009, yielding more accurate VA-based estimates of the number of liver cancer deaths. The model fitted the data quite well, with a sensitivity of 0.64. Confidence intervals from the statistical model provide the estimates and their precision. The VA-estimated numbers of liver cancer deaths were higher than the corresponding VR counts, with inflation factors of 1.56 for males and 1.64 for females. The statistical methods used in this study can be applied to available mortality data in developing countries where national vital registration data are of low quality and reliable supplementary data are available.
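The final adjustment step can be illustrated as follows. The VR counts here are hypothetical, while the inflation factors are those reported in the abstract.

```python
def va_adjusted_deaths(vr_counts, inflation_factors):
    """Scale registered death counts by VA-derived inflation factors
    (the ratio of VA-estimated to VR-registered liver cancer deaths)."""
    return {group: round(vr_counts[group] * inflation_factors[group])
            for group in vr_counts}

vr = {"male": 1000, "female": 500}        # hypothetical VR counts
factors = {"male": 1.56, "female": 1.64}  # factors reported in the study
adjusted = va_adjusted_deaths(vr, factors)
```

In the actual study the adjustment is driven by per-stratum logistic-regression probabilities rather than two aggregate factors; this sketch shows only the inflation arithmetic.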
2014-01-01
Background Uncovering the complex transcriptional regulatory networks (TRNs) that underlie plant and animal development remains a challenge. However, a vast amount of data from public microarray experiments is available and can be subjected to inference algorithms in order to recover reliable TRN architectures. Results In this study we present a simple bioinformatics methodology that uses public, carefully curated microarray data and the mutual information algorithm ARACNe in order to obtain a database of transcriptional interactions. We used data from Arabidopsis thaliana root samples to show that the transcriptional regulatory networks derived from this database successfully recover previously identified root transcriptional modules and to propose new transcription factors for the SHORT ROOT/SCARECROW and PLETHORA pathways. We further show that these networks are a powerful tool to integrate and analyze high-throughput expression data, as exemplified by our analysis of a SHORT ROOT induction time-course microarray dataset, and are a reliable source for the prediction of novel root gene functions. In particular, we used our database to predict novel genes involved in root secondary cell-wall synthesis and identified the MADS-box TF XAL1/AGL12 as an unexpected participant in this process. Conclusions This study demonstrates that network inference using carefully curated microarray data yields reliable TRN architectures. In contrast to previous efforts to obtain root TRNs, which have focused on particular functional modules or tissues, our root transcriptional interactions provide an overview of the transcriptional pathways present in Arabidopsis thaliana roots and will likely yield a plethora of novel hypotheses to be tested experimentally. PMID:24739361
Bromilow, Sophie; Gethings, Lee A; Buckley, Mike; Bromley, Mike; Shewry, Peter R; Langridge, James I; Clare Mills, E N
2017-06-23
The unique physicochemical properties of wheat gluten enable a diverse range of food products to be manufactured. However, gluten triggers coeliac disease, a condition which is treated using a gluten-free diet. Analytical methods are required to confirm whether foods are gluten-free, but current immunoassay-based methods can be unreliable; proteomic methods offer an alternative but require comprehensive, well-annotated sequence databases, which are lacking for gluten. A manually curated database (GluPro V1.0) of gluten proteins, comprising 630 discrete unique full-length protein sequences, has been compiled. It is representative of the different types of gliadin and glutenin components found in gluten. An in silico comparison of their coeliac toxicity was undertaken by analysing the distribution of coeliac toxic motifs. This demonstrated that whilst the α-gliadin proteins contained more toxic motifs, these were distributed across all gluten protein sub-types. Comparison of annotations observed using a discovery proteomics dataset acquired using ion mobility MS/MS showed that more reliable identifications were obtained using the GluPro V1.0 database than with the complete reviewed Viridiplantae database. This highlights the value of a curated sequence database specifically designed to support proteomic workflows and the development of methods to detect and quantify gluten. We have constructed the first manually curated open-source wheat gluten protein sequence database (GluPro V1.0) in FASTA format to support the application of proteomic methods for gluten protein detection and quantification. We have also analysed the manually verified sequences to give the first comprehensive overview of the distribution of sequences able to elicit a reaction in coeliac disease, the prevalent form of gluten intolerance.
Provision of this database will improve the reliability of gluten protein identification by proteomic analysis, and aid the development of targeted mass spectrometry methods in line with Codex Alimentarius Commission requirements for foods designed to meet the needs of gluten intolerant individuals. Copyright © 2017. Published by Elsevier B.V.
Anatomical landmark asymmetry assessment in the lumbar spine and pelvis: a review of reliability.
Stovall, Bradley Alan; Kumar, Shrawan
2010-01-01
The purpose of this article is to review current research investigating the reliability of bony anatomical landmark positional asymmetry assessment in the lumbar spine and pelvis, to determine the agreement on findings between authors, and to explore future directions in the area to address the significant issues. The databases MEDLINE, CINAHL, AMED, MANTIS, Academic Search Complete, and Web of Knowledge were searched. A total of 23 articles were identified and reviewed, 10 of which met the inclusion criteria. For these 10 articles, the average interexaminer reliability for bony anatomical landmark positional asymmetry assessment was slightly above chance for all landmarks except medial malleolus, which had fair reliability. Interexaminer reliability on average was less than intraexaminer reliability (anterior superior iliac spine, k = 0.128/0.414; posterior superior iliac spine, k = 0.092/0.371). All interexaminer reliability averages were below values of clinical significance. From the current literature review, bony anatomical landmark positional asymmetry assessment in the lumbar spine and pelvis has not been demonstrated to be a reliable assessment method. However, there are unexplored factors that, after standardization, may improve reliability and further the understanding of musculoskeletal palpatory examination.
The reliability of the Australasian Triage Scale: a meta-analysis
Ebrahimi, Mohsen; Heydari, Abbas; Mazlom, Reza; Mirhaghi, Amir
2015-01-01
BACKGROUND: Although the Australasian Triage Scale (ATS) was developed two decades ago, its reliability has not been established; we therefore present a meta-analysis of the reliability of the ATS to determine the extent to which it is reliable. DATA SOURCES: Electronic databases were searched to March 2014. The included studies were those that reported sample size, reliability coefficients, and an adequate description of the ATS reliability assessment. The Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were used. Two reviewers independently examined abstracts and extracted data. The effect size was obtained by the z-transformation of reliability coefficients. Data were pooled with random-effects models, and meta-regression was done based on the method-of-moments estimator. RESULTS: Six studies were ultimately included. The pooled coefficient for the ATS was substantial at 0.428 (95%CI 0.340–0.509). The rate of mis-triage was less than fifty percent. Agreement on the adult version was higher than on the pediatric version. CONCLUSION: The ATS has shown an acceptable level of overall reliability in the emergency department, but it needs further development to reach almost perfect agreement. PMID:26056538
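The z-transformation pooling step can be sketched as below. For simplicity this uses equal fixed weights rather than the random-effects weighting of the paper, and the coefficients are illustrative, not the six study values.

```python
import math

def fisher_z(r):
    """Fisher z-transformation of a reliability coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_z(z):
    """Back-transform from the z scale to a correlation-type coefficient."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

def pooled_reliability(coeffs, weights=None):
    """Pool coefficients on the z scale, then back-transform.
    Equal weights are used here; a random-effects model would weight
    each study by the inverse of its total (within + between) variance."""
    if weights is None:
        weights = [1.0] * len(coeffs)
    z_mean = (sum(w * fisher_z(r) for r, w in zip(coeffs, weights))
              / sum(weights))
    return inverse_z(z_mean)

pooled = pooled_reliability([0.40, 0.45, 0.44])  # illustrative coefficients
```

Pooling on the z scale avoids the bias that comes from averaging bounded correlation-type coefficients directly.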
Understanding Expenditure Data.
ERIC Educational Resources Information Center
Dyke, Frances L.
2000-01-01
Stresses the importance of common understandings of cost definitions and data collection in order to create reliable databases with optimal utility for inter-institutional analysis. Examines definitions of common expenditure categories, discusses cost-accumulation rules governing financial reporting, and explains differences between direct costs…
Phenotip - a web-based instrument to help diagnosing fetal syndromes antenatally.
Porat, Shay; de Rham, Maud; Giamboni, Davide; Van Mieghem, Tim; Baud, David
2014-12-10
Prenatal ultrasound can often reliably distinguish fetal anatomic anomalies, particularly in the hands of an experienced ultrasonographer. Given the large number of existing syndromes and the significant overlap in prenatal findings, antenatal differentiation for syndrome diagnosis is difficult. We constructed a hierarchic tree of 1140 sonographic markers and submarkers, organized per organ system. Subsequently, a database of prenatally diagnosable syndromes was built. An internet-based search engine was then designed to search the syndrome database based on a single or multiple sonographic markers. Future developments will include a database with magnetic resonance imaging findings as well as further refinements in the search engine to allow prioritization based on incidence of syndromes and markers.
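A naive sketch of marker-based syndrome lookup follows. The syndrome names and marker sets are hypothetical, and the actual Phenotip engine searches 1140 hierarchically organized markers rather than flat sets.

```python
def match_syndromes(syndrome_db, observed_markers):
    """Rank syndromes by how many of the observed sonographic markers
    they include (a naive stand-in for the search engine described)."""
    scores = {name: len(markers & observed_markers)
              for name, markers in syndrome_db.items()}
    return sorted((item for item in scores.items() if item[1] > 0),
                  key=lambda kv: -kv[1])

db = {  # hypothetical syndromes and marker sets
    "Syndrome A": {"ventriculomegaly", "clubfoot"},
    "Syndrome B": {"ventriculomegaly", "cleft lip", "polydactyly"},
}
ranked = match_syndromes(db, {"ventriculomegaly", "polydactyly"})
```

The planned prioritization by syndrome and marker incidence would amount to weighting these raw overlap scores.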
IRIS: A database application system for diseases identification using FTIR spectroscopy
NASA Astrophysics Data System (ADS)
Arshad, Ahmad Zulhilmi; Munajat, Yusof; Ibrahim, Raja Kamarulzaman Raja; Mahmood, Nasrul Humaimi
2015-05-01
Infrared information on diseases identification system (IRIS) is an application for disease identification and analysis using Fourier transform infrared (FTIR) spectroscopy. As a preliminary step, secondary data were gathered from various recognized research and scientific papers and combined into the single IRIS database for the purpose of this study. The importance of this database lies in examining the fingerprint differences between normal and diseased cells or tissues. With this application, it is hoped that disease identification using FTIR spectroscopy will be more reliable and will assist physicians, pathologists, and researchers in diagnosing certain types of disease efficiently.
Establishment of Low Energy Building materials and Equipment Database Based on Property Information
NASA Astrophysics Data System (ADS)
Kim, Yumin; Shin, Hyery; Lee, Seung-eon
2018-03-01
The purpose of this study is to provide a reliable materials-information portal service through the establishment of public big data, by collecting and integrating scattered low energy building materials and equipment data. Few existing low energy building material databases in Korea have provided material properties as factors influencing material pricing. The framework of the database was defined with reference to the Korea On-line E-procurement system. More than 45,000 data records were gathered according to the specification of entities, and from the gathered data, price prediction models for chillers were proposed. To improve the usability of the prediction models, detailed properties should be analysed for each item.
Space solar array reliability: A study and recommendations
NASA Astrophysics Data System (ADS)
Brandhorst, Henry W., Jr.; Rodiek, Julie A.
2008-12-01
Providing reliable power over the anticipated mission life is critical to all satellites; therefore solar arrays are one of the most vital links to satellite mission success. Furthermore, solar arrays are exposed to the harshest environment of virtually any satellite component. In the past 10 years 117 satellite solar array anomalies have been recorded with 12 resulting in total satellite failure. Through an in-depth analysis of satellite anomalies listed in the Airclaim's Ascend SpaceTrak database, it is clear that solar array reliability is a serious, industry-wide issue. Solar array reliability directly affects the cost of future satellites through increased insurance premiums and a lack of confidence by investors. Recommendations for improving reliability through careful ground testing, standardization of testing procedures such as the emerging AIAA standards, and data sharing across the industry will be discussed. The benefits of creating a certified module and array testing facility that would certify in-space reliability will also be briefly examined. Solar array reliability is an issue that must be addressed to both reduce costs and ensure continued viability of the commercial and government assets on orbit.
Thermal-hydraulic study of moderator flow in the CANDU-6 reactor
NASA Astrophysics Data System (ADS)
Mehdi Zadeh, Foad
Given the size (6.0 m x 7.6 m) and the multiply connected domain that characterize the vessel of CANDU-6 reactors (380 channels in the vessel), the physics governing the behaviour of the moderator fluid is still poorly understood today. Sampling data in an operating reactor would require modifying the configuration of the reactor vessel to insert probes, and the presence of an intense radiation zone prevents the use of standard sampling sensors. Consequently, the moderator flow must be studied using either an experimental model or a numerical model. Building and commissioning an experimental facility is very costly, and the scaling parameters required to build a reduced-scale experimental model are mutually contradictory. Numerical modelling therefore remains an important alternative. Currently, the nuclear industry uses a numerical approach known as the porous-medium approach, which approximates the domain as a continuous medium in which the tube network is replaced by distributed hydraulic resistances. This model can describe the macroscopic phenomena of the flow, but does not account for local effects that influence the overall flow, such as the temperature and velocity distributions near the tubes and hydrodynamic instabilities. In the context of nuclear safety, the local effects around the calandria tubes are of particular interest. Indeed, simulations performed with this approach predict that the flow can adopt several hydrodynamic configurations, some of which exhibit asymmetric behaviour within the vessel. This can cause boiling of the moderator on the channel walls.
Under such conditions, the reactivity coefficient can vary significantly, leading to an increase in reactor power, which can have major consequences for nuclear safety. A detailed CFD (Computational Fluid Dynamics) model accounting for local effects is therefore necessary. The goal of this research is to model the complex behaviour of the moderator flow within the vessel of a CANDU-6 nuclear reactor, particularly near the calandria tubes. These simulations serve to identify the possible flow configurations in the calandria. This study thus aims to formulate the theoretical bases underlying the macroscopic instabilities of the moderator, i.e. the asymmetric motions that can cause moderator boiling. The challenge of the project is to determine the impact of these flow configurations on the reactivity of the CANDU-6 reactor.
Metagenomic Taxonomy-Guided Database-Searching Strategy for Improving Metaproteomic Analysis.
Xiao, Jinqiu; Tanca, Alessandro; Jia, Ben; Yang, Runqing; Wang, Bo; Zhang, Yu; Li, Jing
2018-04-06
Metaproteomics provides a direct measure of the functional information by investigating all proteins expressed by a microbiota. However, due to the complexity and heterogeneity of microbial communities, it is very hard to construct a sequence database suitable for a metaproteomic study. Using a public database, researchers might not be able to identify proteins from poorly characterized microbial species, while a sequencing-based metagenomic database may not provide adequate coverage for all potentially expressed protein sequences. To address this challenge, we propose a metagenomic taxonomy-guided database-search strategy (MT), in which a merged database is employed, consisting of both taxonomy-guided reference protein sequences from public databases and proteins from metagenome assembly. By applying our MT strategy to a mock microbial mixture, about two times as many peptides were detected as with the metagenomic database only. According to the evaluation of the reliability of taxonomic attribution, the rate of misassignments was comparable to that obtained using an a priori matched database. We also evaluated the MT strategy with a human gut microbial sample, and we found 1.7 times as many peptides as using a standard metagenomic database. In conclusion, our MT strategy allows the construction of databases able to provide high sensitivity and precision in peptide identification in metaproteomic studies, enabling the detection of proteins from poorly characterized species within the microbiota.
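The merging step of the MT strategy can be sketched as follows. The accessions and sequences are toy assumptions, and duplicate handling in the actual pipeline may differ.

```python
def merge_databases(taxonomy_refs, metagenome_assembly):
    """Merge taxonomy-guided reference proteins with metagenome-assembled
    proteins, dropping exact duplicate sequences so each sequence
    appears once in the combined search database."""
    merged, seen = {}, set()
    for source in (taxonomy_refs, metagenome_assembly):
        for accession, seq in source.items():
            if seq not in seen:
                merged[accession] = seq
                seen.add(seq)
    return merged

refs = {"sp|P1": "MKT", "sp|P2": "MGA"}           # hypothetical entries
assembly = {"contig_1": "MKT", "contig_2": "MLV"}  # hypothetical entries
db = merge_databases(refs, assembly)
```

Processing the reference set first means duplicated sequences keep their public-database accession, which helps downstream taxonomic attribution.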
Ahuja, Jaspreet K C; Moshfegh, Alanna J; Holden, Joanne M; Harris, Ellen
2013-02-01
The USDA food and nutrient databases provide the basic infrastructure for food and nutrition research, nutrition monitoring, policy, and dietary practice. They have had a long history that goes back to 1892 and are unique, as they are the only databases available in the public domain that perform these functions. There are 4 major food and nutrient databases released by the Beltsville Human Nutrition Research Center (BHNRC), part of the USDA's Agricultural Research Service. These include the USDA National Nutrient Database for Standard Reference, the Dietary Supplement Ingredient Database, the Food and Nutrient Database for Dietary Studies, and the USDA Food Patterns Equivalents Database. The users of the databases are diverse and include federal agencies, the food industry, health professionals, restaurants, software application developers, academia and research organizations, international organizations, and foreign governments, among others. Many of these users have partnered with BHNRC to leverage funds and/or scientific expertise to work toward common goals. The use of the databases has increased tremendously in the past few years, especially the breadth of uses. These new uses of the data are bound to increase with the increased availability of technology and public health emphasis on diet-related measures such as sodium and energy reduction. Hence, continued improvement of the databases is important, so that they can better address these challenges and provide reliable and accurate data.
Burisch, Johan; Gisbert, Javier P; Siegmund, Britta; Bettenworth, Dominik; Thomsen, Sandra Bohn; Cleynen, Isabelle; Cremer, Anneline; Ding, Nik John Sheng; Furfaro, Federica; Galanopoulos, Michail; Grunert, Philip Christian; Hanzel, Jurij; Ivanovski, Tamara Knezevic; Krustins, Eduards; Noor, Nurulamin; O'Morain, Neil; Rodríguez-Lago, Iago; Scharl, Michael; Tua, Julia; Uzzan, Mathieu; Ali Yassin, Nuha; Baert, Filip; Langholz, Ebbe
2018-04-27
The 'United Registries for Clinical Assessment and Research' [UR-CARE] database is an initiative of the European Crohn's and Colitis Organisation [ECCO] to facilitate daily patient care and research studies in inflammatory bowel disease [IBD]. Herein, we sought to validate the database by using fictional case histories of patients with IBD that were to be entered by observers of varying experience in IBD. Nineteen observers entered five patient case histories into the database. After 6 weeks, all observers entered the same case histories again. For each case history, 20 key variables were selected to calculate the accuracy for each observer. We assumed that the database was such that ≥ 90% of the entered data would be correct. The overall proportion of correctly entered data was calculated using a beta-binomial regression model to account for inter-observer variation and compared to the expected level of validity. Re-test reliability was assessed using McNemar's test. For all case histories, the overall proportion of correctly entered items and their confidence intervals included the target of 90% (Case 1: 92% [88-94%]; Case 2: 87% [83-91%]; Case 3: 93% [90-95%]; Case 4: 97% [94-99%]; Case 5: 91% [87-93%]). These numbers did not differ significantly from those found 6 weeks later [McNemar's test, p > 0.05]. The UR-CARE database appears to be feasible, valid and reliable as a tool and easy to use regardless of prior user experience and level of clinical IBD experience. UR-CARE has the potential to enhance future European collaborations regarding clinical research in IBD.
Monitoring of services with non-relational databases and map-reduce framework
NASA Astrophysics Data System (ADS)
Babik, M.; Souto, F.
2012-12-01
Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
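A minimal map-reduce sketch of the availability computation described above: site names and statuses are invented, and real SAM records carry many more fields than this.

```python
from functools import reduce

def map_phase(record):
    """Map a raw test result to a (site, (ok, total)) pair."""
    site, status = record
    return (site, (1 if status == "OK" else 0, 1))

def reduce_phase(acc, pair):
    """Accumulate per-site counts of passing tests and total tests."""
    site, (ok, total) = pair
    prev_ok, prev_total = acc.get(site, (0, 0))
    acc[site] = (prev_ok + ok, prev_total + total)
    return acc

# Hypothetical monitoring records: (site, test status)
records = [("CERN", "OK"), ("CERN", "FAIL"), ("FNAL", "OK")]
totals = reduce(reduce_phase, map(map_phase, records), {})
availability = {site: ok / tot for site, (ok, tot) in totals.items()}
```

Because the reduce step is associative, the same logic distributes naturally across the map-reduce frameworks of the non-relational stores discussed (e.g. HBase or MongoDB aggregation), allowing raw data to be kept and re-processed.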
Letourneau, Nicole L; Tryphonopoulos, Panagiota D; Novick, Jason; Hart, J Martha; Giesbrecht, Gerald; Oxford, Monica L
Many nurses rely on the American Nursing Child Assessment Satellite Training (NCAST) Parent-Child Interaction (PCI) Teaching and Feeding Scales to identify and target interventions for families affected by severe/chronic stressors (e.g. postpartum depression (PPD), intimate partner violence (IPV), low-income). However, the NCAST Database that provides normative data for comparisons may not apply to Canadian families. The purpose of this study was to compare NCAST PCI scores in Canadian and American samples and to assess the reliability of the NCAST PCI Scales in Canadian samples. This secondary analysis employed independent samples t-tests (p < 0.005) to compare PCI between the American NCAST Database and Canadian high-risk (families with PPD, exposure to IPV or low-income) and community samples. Cronbach's alphas were calculated for the Canadian and American samples. In both American and Canadian samples, belonging to a high-risk population reduced parents' abilities to engage in sensitive and responsive caregiving (i.e. healthy serve and return relationships) as measured by the PCI Scales. NCAST Database mothers were more effective at executing caregiving responsibilities during PCI compared to the Canadian community sample, while infants belonging to the Canadian community sample provided clearer cues to caregivers during PCI compared to those of the NCAST Database. Internal consistency coefficients for the Canadian samples were generally acceptable. The NCAST Database can be reliably used for assessing PCI in normative and high-risk Canadian families. Canadian nurses can be assured that the PCI Scales adequately identify risks and can help target interventions to promote optimal parent-child relationships and ultimately child development. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
Fast T Wave Detection Calibrated by Clinical Knowledge with Annotation of P and T Waves.
Elgendi, Mohamed; Eskofier, Bjoern; Abbott, Derek
2015-07-21
There are limited studies on the automatic detection of T waves in arrhythmic electrocardiogram (ECG) signals. This is perhaps because there is no available arrhythmia dataset with annotated T waves. There is a growing need to develop numerically-efficient algorithms that can accommodate the new trend of battery-driven ECG devices. Moreover, there is also a need to analyze long-term recorded signals in a reliable and time-efficient manner, therefore improving the diagnostic ability of mobile devices and point-of-care technologies. Here, the T wave annotation of the well-known MIT-BIH arrhythmia database is discussed and provided. Moreover, a simple fast method for detecting T waves is introduced. A typical T wave detection method has been reduced to a basic approach consisting of two moving averages and dynamic thresholds. The dynamic thresholds were calibrated using four clinically known types of sinus node response to atrial premature depolarization (compensation, reset, interpolation, and reentry). The determination of T wave peaks is performed and the proposed algorithm is evaluated on two well-known databases, the QT and MIT-BIH Arrhythmia databases. The detector obtained a sensitivity of 97.14% and a positive predictivity of 99.29% over the first lead of the validation databases (total of 221,186 beats). We present a simple yet very reliable T wave detection algorithm that can be potentially implemented on mobile battery-driven devices. In contrast to complex methods, it can be easily implemented in a digital filter design.
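The two-moving-averages idea can be sketched as follows. The window lengths and the toy signal are illustrative assumptions, and the clinically calibrated dynamic thresholds of the paper are reduced here to a simple comparison of the two averages.

```python
def moving_average(signal, window):
    """Trailing moving average over the last `window` samples."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i - lo + 1))
    return out

def detect_blocks(signal, short_w=3, long_w=7):
    """Mark samples where the short (event-scale) moving average rises
    above the long (beat-scale) one: candidate blocks of interest that
    a full detector would screen with dynamic, clinically calibrated
    thresholds before locating the T wave peak."""
    short = moving_average(signal, short_w)
    long_ = moving_average(signal, long_w)
    return [1 if s > l else 0 for s, l in zip(short, long_)]

sig = [0, 0, 0, 1, 3, 1, 0, 0, 0, 0]  # toy waveform with one bump
blocks = detect_blocks(sig)
```

Only additions and comparisons are needed per sample, which is why this style of detector suits battery-driven devices.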
Wang, Qi; Zhao, Xiao-Juan; Wang, Zi-Wei; Liu, Li; Wei, Yong-Xin; Han, Xiao; Zeng, Jing; Liao, Wan-Jin
2017-08-01
Rapid and precise identification of Cronobacter species is important for foodborne pathogen detection, however, commercial biochemical methods can only identify Cronobacter strains to genus level in most cases. To evaluate the power of mass spectrometry based on matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF MS) for Cronobacter species identification, 51 Cronobacter strains (eight reference and 43 wild strains) were identified by both MALDI-TOF MS and 16S rRNA gene sequencing. Biotyper RTC provided by Bruker identified all eight reference and 43 wild strains as Cronobacter species, which demonstrated the power of MALDI-TOF MS to identify Cronobacter strains to genus level. However, using the Bruker's database (6903 main spectra products) and Biotyper software, the MALDI-TOF MS analysis could not identify the investigated strains to species level. When MALDI-TOF MS analysis was performed using the combined in-house Cronobacter database and Bruker's database, bin setting, and unweighted pair group method with arithmetic mean (UPGMA) clustering, all the 51 strains were clearly identified into six Cronobacter species and the identification accuracy increased from 60% to 100%. We demonstrated that MALDI-TOF MS was reliable and easy-to-use for Cronobacter species identification and highlighted the importance of establishing a reliable database and improving the current data analysis methods by integrating the bin setting and UPGMA clustering. Copyright © 2017. Published by Elsevier B.V.
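A toy stand-in for database matching of spectra: the species names, binned peak lists, and Jaccard-style score below are assumptions for illustration, not Bruker's actual scoring, and real identification would also involve the bin setting and UPGMA clustering described above.

```python
def best_match(spectrum, reference_db):
    """Identify a strain by the highest spectral similarity score
    between its binned peak list and each reference entry."""
    def score(a, b):
        shared = set(a) & set(b)                 # shared m/z bins
        return len(shared) / max(len(set(a) | set(b)), 1)
    return max(reference_db, key=lambda name: score(spectrum, reference_db[name]))

refs = {  # hypothetical binned peak lists (m/z bins)
    "C. sakazakii": {3005, 4120, 5230, 6890},
    "C. malonaticus": {3005, 4500, 5230, 7010},
}
species = best_match({3005, 4120, 5230, 6800}, refs)
```

Discretizing peaks into bins before comparison is what makes near-identical spectra from the same species score highly despite small m/z shifts.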
[Validation of and adherence to GESIDA quality indicators in patients with HIV infection].
Riera, Melchor; Esteban, Herminia; Suarez, Ignacio; Palacios, Rosario; Lozano, Fernando; Blanco, Jose R; Valencia, Eulalia; Ocampo, Antonio; Amador, Concha; Frontera, Guillem; vonWichmann-de Miguel, Miguel Angel
2016-01-01
The objective of this study was to validate the relevant GESIDA quality indicators for HIV infection, assessing their reliability and feasibility and the adherence to them. Reliability was evaluated through the reproducibility of 6 indicators in peer review, with the second observer being an outsider. The feasibility and the level of adherence to the 22 indicators were measured through annual retrospective collection of information from specific databases or the clinical charts of the nine participating hospitals. Reliability was very high, with interobserver agreement above 95% for 5 of the 6 indicators. The median time to obtain the indicators ranged between 5 and 600 minutes, but they could be obtained progressively from specific databases, eventually enabling automatic retrieval. Good adherence was observed for the indicators related to the initial evaluation of patients, the indication and suitability of ART according to the guidelines, adherence to ART, follow-up in clinics, and achievement of an undetectable HIV viral load by PCR at week 48 of ART. For the quality indicators related to the prevention of opportunistic infections and the control of comorbidities, the standards set were not achieved, and significant heterogeneity was observed between hospitals. The GESIDA quality indicators for HIV infection enabled the relevant indicators to be measured feasibly and reliably, and should be collected in all units that care for patients with HIV infection. Copyright © 2015 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
NASA Astrophysics Data System (ADS)
Knosp, B.; Gangl, M.; Hristova-Veleva, S. M.; Kim, R. M.; Li, P.; Turk, J.; Vu, Q. A.
2015-12-01
The JPL Tropical Cyclone Information System (TCIS) brings together satellite, aircraft, and model forecast data from several NASA, NOAA, and other data centers to assist researchers in comparing and analyzing data and model forecasts related to tropical cyclones. Since 2010, the TCIS has run a near-real-time (NRT) data portal during the North Atlantic hurricane season, which typically runs from June through October each year. Data collected by the TCIS vary by type, format, content, and frequency and are served to the user in two ways: (1) as image overlays on a virtual globe and (2) as derived output from a suite of analysis tools. In order to support these two functions, the data must be collected and then made searchable by criteria such as date, mission, product, pressure level, and geospatial region. Creating a database architecture that is flexible enough to manage, intelligently interrogate, and ultimately present this disparate data to the user in a meaningful way has been the primary challenge. The database solution for the TCIS has been a hybrid MySQL + Solr implementation. After testing other relational database and NoSQL solutions, such as PostgreSQL and MongoDB respectively, this solution has given the TCIS the best offering in terms of query speed and result reliability. This database solution also supports the challenging (and memory-intensive) geospatial queries that are necessary to support the analysis tools requested by users. Though hardly new technologies on their own, our implementation of MySQL + Solr had to be customized and tuned to accurately store, index, and search the TCIS data holdings. In this presentation, we will discuss how we arrived at our MySQL + Solr database architecture, why it offers us the most consistently fast and reliable results, and how it supports our front end so that we can offer users a look into our "big data" holdings.
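As a toy illustration of the hybrid design, the sketch below pairs a relational store (sqlite3 standing in for MySQL) with a hand-rolled inverted index (standing in for Solr) and intersects the two result sets on a shared record id. The table layout, field names, and sample data are all invented for the example:

```python
import sqlite3

# Relational side (sqlite3 here as a stand-in for MySQL): structured
# metadata such as mission, product, observation date, and location.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE granule (id INTEGER PRIMARY KEY, mission TEXT, "
           "product TEXT, obs_date TEXT, lat REAL, lon REAL)")
rows = [(1, "GPM", "rain_rate", "2015-08-20", 25.0, -70.0),
        (2, "GPM", "rain_rate", "2015-08-21", 26.5, -68.0),
        (3, "ISS", "winds", "2015-08-21", 24.0, -71.0)]
db.executemany("INSERT INTO granule VALUES (?, ?, ?, ?, ?, ?)", rows)

# Text-search side (a toy inverted index standing in for Solr):
# free-text descriptions indexed term by term.
descriptions = {1: "hurricane danny rain rate swath",
                2: "hurricane danny rain rate overpass",
                3: "ocean surface winds near storm"}
index = {}
for doc_id, text in descriptions.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

def search(mission, term):
    """Answer a combined query: structured filter in SQL, free-text
    filter in the inverted index, intersected on granule id."""
    sql_ids = {r[0] for r in db.execute(
        "SELECT id FROM granule WHERE mission = ?", (mission,))}
    text_ids = index.get(term, set())
    return sorted(sql_ids & text_ids)
```

The design point this mimics is that each engine answers only the query shape it is good at, and the application joins on ids rather than forcing one system to do both jobs.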
BrEPS 2.0: Optimization of sequence pattern prediction for enzyme annotation.
Dudek, Christian-Alexander; Dannheim, Henning; Schomburg, Dietmar
2017-01-01
The prediction of gene functions is crucial for a large number of different life science areas. Faster high-throughput sequencing techniques generate more and larger datasets. Manual annotation by classical wet-lab experiments is not suitable for these large amounts of data. We showed earlier that the automatic sequence pattern-based BrEPS protocol, based on manually curated sequences, can be used for the prediction of enzymatic functions of genes. The growing sequence databases provide the opportunity for more reliable patterns, but are also a challenge for the implementation of automatic protocols. We reimplemented and optimized the BrEPS pattern generation to be applicable to larger datasets on an acceptable timescale. The primary improvement of the new BrEPS protocol is the enhanced data selection step. Manually curated annotations from Swiss-Prot are used as a reliable source for function prediction of enzymes observed at the protein level. The pool of sequences is extended by highly similar sequences from TrEMBL and Swiss-Prot. This allows us to restrict the selection of Swiss-Prot entries without losing the diversity of sequences needed to generate significant patterns. Additionally, a supporting pattern type was introduced by extending the patterns at semi-conserved positions with highly similar amino acids. Extended patterns have an increased complexity, increasing the chance of matching more sequences without losing the essential structural information of the pattern. To enhance the usability of the database, we introduced enzyme function prediction based on consensus EC numbers and the IUBMB enzyme nomenclature. BrEPS is part of the Braunschweig Enzyme Database (BRENDA) and is available on a completely redesigned website and as a download. The database can be downloaded and used with the BrEPScmd command line tool for large-scale sequence analysis.
The BrEPS website and downloads for the database creation tool, command line tool and database are freely accessible at http://breps.tu-bs.de.
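As a rough illustration of sequence-pattern generation with extension at semi-conserved positions, the sketch below derives a regular expression from a toy alignment. The similarity groups are invented for the example; BrEPS derives its patterns from curated Swiss-Prot data, not from these hard-coded sets:

```python
import re

# Illustrative amino-acid similarity groups (assumption, not BrEPS's own).
SIMILAR = {"I": "ILV", "L": "ILV", "V": "ILV",
           "D": "DE", "E": "DE", "K": "KR", "R": "KR"}

def build_pattern(aligned, extend=True):
    """Derive a regex from equal-length aligned sequences: fully
    conserved columns keep their residue, semi-conserved columns become
    a character class (optionally extended with similar amino acids),
    and highly variable columns become a wildcard."""
    parts = []
    for col in zip(*aligned):
        residues = set(col)
        if len(residues) == 1:
            parts.append(col[0])
        elif len(residues) <= 3:
            chars = set(residues)
            if extend:
                # Extend the semi-conserved position with similar residues.
                for aa in residues:
                    chars.update(SIMILAR.get(aa, aa))
            parts.append("[%s]" % "".join(sorted(chars)))
        else:
            parts.append(".")
    return re.compile("".join(parts))
```

For the toy alignment ["MKIDGA", "MKLEGA", "MKVDGA"] this yields the pattern MK[ILV][DE]GA, which also matches the unseen combination MKVEGA: the extension trades a little specificity for the ability to match sequences just outside the training set.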
Pontolillo, James; Eganhouse, R.P.
2001-01-01
The accurate determination of an organic contaminant's physico-chemical properties is essential for predicting its environmental impact and fate. Approximately 700 publications (1944–2001) were reviewed and all known aqueous solubilities (Sw) and octanol-water partition coefficients (Kow) for the organochlorine pesticide DDT and its persistent metabolite DDE were compiled and examined. Two problems are evident with the available database: 1) egregious errors in reporting data and references, and 2) poor data quality and/or inadequate documentation of procedures. The published literature (particularly the collative literature, such as compilation articles and handbooks) is characterized by a preponderance of unnecessary data duplication. Numerous data and citation errors are also present in the literature. The percentage of original Sw and Kow data in compilations has decreased with time, and in the most recent publications (1994–97) it comprises only 6–26 percent of the reported data. The variability of original DDT/DDE Sw and Kow data spans 2–4 orders of magnitude, and there is little indication that the uncertainty in these properties has declined over the last 5 decades. A criteria-based evaluation of DDT/DDE Sw and Kow data sources shows that 95–100 percent of the database literature is of poor or unevaluatable quality. The accuracy and reliability of the vast majority of the data are unknown due to inadequate documentation of the methods of determination used by the authors. [For example, estimates of precision have been reported for only 20 percent of experimental Sw data and 10 percent of experimental Kow data.] Computational methods for estimating these parameters have been increasingly substituted for direct or indirect experimental determination despite the fact that the data used for model development and validation may be of unknown reliability.
Because of the prevalence of errors, the lack of methodological documentation, and unsatisfactory data quality, the reliability of the DDT/DDE Sw and Kow database is questionable. The nature and extent of the errors documented in this study are probably indicative of a more general problem in the literature of hydrophobic organic compounds. Under these circumstances, estimation of critical environmental parameters on the basis of Sw and Kow (for example, bioconcentration factors, equilibrium partition coefficients) is inadvisable because it will likely lead to incorrect environmental risk assessments. The current state of the database indicates that much greater efforts are needed to: 1) halt the proliferation of erroneous data and references, 2) initiate a coordinated program to develop improved methods of property determination, 3) establish and maintain consistent reporting requirements for physico-chemical property data, and 4) create a mechanism for archiving reliable data for widespread use in the scientific/regulatory community.
Building An Integrated Neurodegenerative Disease Database At An Academic Health Center
Xie, Sharon X.; Baek, Young; Grossman, Murray; Arnold, Steven E.; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M.-Y.; Trojanowski, John Q.
2010-01-01
Background: It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer's disease (AD), Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS), and frontotemporal lobar degeneration (FTLD). These comparative studies rely on powerful database tools to quickly generate data sets that match diverse and complementary criteria set by the studies. Methods: In this paper, we present a novel Integrated NeuroDegenerative Disease (INDD) database developed at the University of Pennsylvania (Penn) through a consortium of Penn investigators. Since these investigators work on AD, PD, ALS and FTLD, this allowed us to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used Microsoft SQL Server as the platform, with built-in “backwards” functionality to provide Access as a front-end client to interface with the database. We used PHP (Hypertext Preprocessor) to create the “front end” web interface and then integrated the individual neurodegenerative disease databases using a master lookup table. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Results: We compare the results of a biomarker study using the INDD database to those of an alternative approach querying the individual databases separately. Conclusions: We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies across several neurodegenerative diseases. PMID:21784346
Padliya, Neerav D; Garrett, Wesley M; Campbell, Kimberly B; Tabb, David L; Cooper, Bret
2007-11-01
LC-MS/MS has demonstrated potential for detecting plant pathogens. Unlike PCR or ELISA, LC-MS/MS does not require pathogen-specific reagents for the detection of pathogen-specific proteins and peptides. However, the MS/MS approach we and others have explored does require a protein sequence reference database and database-search software to interpret tandem mass spectra. To evaluate the limitations of database composition on pathogen identification, we analyzed proteins from cultured Ustilago maydis, Phytophthora sojae, Fusarium graminearum, and Rhizoctonia solani by LC-MS/MS. When the search database did not contain sequences for a target pathogen, or contained sequences from related pathogens, target pathogen spectra were reliably matched to protein sequences from nontarget organisms, giving the illusion that proteins from nontarget organisms had been identified. Our analysis demonstrates that when database-search software is used as part of the identification process, a paradox exists whereby the additional sequences needed to detect a wide variety of possible organisms may lead to more cross-species protein matches and misidentification of pathogens.
DOT National Transportation Integrated Search
2005-02-01
This report presents STRUCTVIEW, a vehicle-based system for the measurement of roadway structure profiles, which uses a scanning laser rangefinder to measure various structure and roadway features while traveling at highway speed. Measurement capab...
Building a Database of Developmental Neurotoxicants: Evidence from Human and Animal Studies
EPA’s program for the screening and prioritization of chemicals for developmental neurotoxicity (DNT) necessitates the generation of a list of chemicals that are known mammalian developmental neurotoxicants. This chemical list will be used to evaluate the sensitivity, reliability...
Competitive strategy: a new era.
Zuckerman, Alan M
2007-11-01
By adopting five basic practices, your organization will be ready to advance to the next level of competitive fitness: Develop a reliable financial baseline. Insist on development of a competitive intelligence database system. Employ rigorous business planning. Advocate for focus and discipline. Really commit to competing.
NASA Astrophysics Data System (ADS)
Cavallari, Francesca; de Gruttola, Michele; Di Guida, Salvatore; Govi, Giacomo; Innocente, Vincenzo; Pfeiffer, Andreas; Pierro, Antonio
2011-12-01
Automatic, synchronous and reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. In this complex infrastructure, monitoring and fast detection of errors is a very challenging task. In this paper, we describe the CMS experiment's system for processing and populating the Condition Databases and making condition data promptly available both online for the high-level trigger and offline for reconstruction. The data are automatically collected using centralized jobs or are "dropped" by the users into dedicated services (offline and online drop-box), which synchronize them and take care of writing them into the online database. They are then automatically streamed to the offline database, and thus are immediately accessible offline worldwide. The condition data are managed by different users using a wide range of applications. In normal operation the database monitor is used to provide simple timing information and the history of all transactions for all database accounts; in the case of faults it is used to return simple error messages and more complete debugging information.
Searching fee and non-fee toxicology information resources: an overview of selected databases.
Wright, L L
2001-01-12
Toxicology profiles organize information by broad subjects, the first of which affirms identity of the agent studied. Studies here show two non-fee databases (ChemFinder and ChemIDplus) verify the identity of compounds with high efficiency (63% and 73% respectively) with the fee-based Chemical Abstracts Registry file serving well to fill data gaps (100%). Continued searching proceeds using knowledge of structure, scope and content to select databases. Valuable sources for information are factual databases that collect data and facts in special subject areas organized in formats available for analysis or use. Some sources representative of factual files are RTECS, CCRIS, HSDB, GENE-TOX and IRIS. Numerous factual databases offer a wealth of reliable information; however, exhaustive searches probe information published in journal articles and/or technical reports with records residing in bibliographic databases such as BIOSIS, EMBASE, MEDLINE, TOXLINE and Web of Science. Listed with descriptions are numerous factual and bibliographic databases supplied by 11 producers. Given the multitude of options and resources, it is often necessary to seek service desk assistance. Questions were posed by telephone and e-mail to service desks at DIALOG, ISI, MEDLARS, Micromedex and STN International. Results of the survey are reported.
ARACHNID: A prototype object-oriented database tool for distributed systems
NASA Technical Reports Server (NTRS)
Younger, Herbert; Oreilly, John; Frogner, Bjorn
1994-01-01
This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many gigabytes, for which an order-of-magnitude improvement in query speed is being sought. The database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors, with only a weak data linkage between the processors being required. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.
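The geographical decomposition can be illustrated with a toy tiling scheme: sources are bucketed into fixed-size sky tiles so a box query scans only the overlapping tiles rather than the whole catalog. The tile size and the in-memory layout are assumptions for the sketch; the real system distributes the pieces across processors:

```python
TILE = 10.0  # tile size in degrees (illustrative choice)

def tile_of(ra, dec):
    # Map a position to the integer coordinates of its tile.
    return (int(ra // TILE), int(dec // TILE))

def build_partitions(sources):
    """Bucket (name, ra, dec) records by tile; each bucket is the unit
    that could be searched independently on its own processor."""
    parts = {}
    for name, ra, dec in sources:
        parts.setdefault(tile_of(ra, dec), []).append((name, ra, dec))
    return parts

def box_query(parts, ra_min, ra_max, dec_min, dec_max):
    """Scan only the tiles that overlap the query box, then apply the
    exact bounds check within each candidate tile."""
    hits = []
    for tx in range(int(ra_min // TILE), int(ra_max // TILE) + 1):
        for ty in range(int(dec_min // TILE), int(dec_max // TILE) + 1):
            for name, ra, dec in parts.get((tx, ty), []):
                if ra_min <= ra <= ra_max and dec_min <= dec <= dec_max:
                    hits.append(name)
    return sorted(hits)
```

Because the tile lookup prunes most of the catalog before any per-record test runs, the same structure parallelizes naturally: each tile's scan is independent, matching the paper's "weak data linkage" observation.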
Austvoll-Dahlgren, Astrid; Guttersrud, Øystein; Nsangi, Allen; Semakula, Daniel; Oxman, Andrew D
2017-05-25
The Claim Evaluation Tools database contains multiple-choice items for measuring people's ability to apply the key concepts they need to know to be able to assess treatment claims. We assessed items from the database using Rasch analysis to develop an outcome measure to be used in two randomised trials in Uganda. Rasch analysis is a form of psychometric testing relying on Item Response Theory. It is a dynamic way of developing outcome measures that are valid and reliable. Our objective was to assess the validity, reliability and responsiveness of 88 items addressing 22 key concepts using Rasch analysis. We administered four sets of multiple-choice items in English to 1114 people in Uganda and Norway, of whom 685 were children and 429 were adults (including 171 health professionals). We scored all items dichotomously. We explored summary and individual fit statistics using the RUMM2030 analysis package. We used SPSS to perform distractor analysis. Most items conformed well to the Rasch model, but some items needed revision. Overall, the four item sets had satisfactory reliability. We did not identify significant response dependence between any pairs of items and, overall, the magnitude of multidimensionality in the data was acceptable. The items had a high level of difficulty. Most of the items conformed well to the Rasch model's expectations. Following revision of some items, we concluded that most of the items were suitable for use in an outcome measure for evaluating the ability of children or adults to assess treatment claims. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
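For readers unfamiliar with Rasch analysis: the dichotomous Rasch model gives the probability of a correct answer as a logistic function of the difference between person ability and item difficulty. A minimal sketch follows; the actual fit statistics in the study come from the RUMM2030 package, not from code like this:

```python
import math

def rasch_p(theta, b):
    """Probability that a person with ability theta answers an item of
    difficulty b correctly, under the dichotomous Rasch model:
    P = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, difficulties):
    # Expected total score for one person over a set of items.
    return sum(rasch_p(theta, b) for b in difficulties)
```

When ability equals difficulty the probability is exactly 0.5; "the items had a high level of difficulty" means the b values sat high relative to the ability distribution, pushing expected scores down.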
FY11 Facility Assessment Study for Aeronautics Test Program
NASA Technical Reports Server (NTRS)
Loboda, John A.; Sydnor, George H.
2013-01-01
This paper presents the approach and results for the Aeronautics Test Program (ATP) FY11 Facility Assessment Project. ATP commissioned assessments in FY07 and FY11 to aid in understanding the current condition and reliability of its facilities and their ability to meet current and future (five-year horizon) test requirements. The principal output of the assessment was a database of facility-unique, prioritized investment projects with budgetary cost estimates. This database was also used to identify trends in the condition of facility systems.
On the Development of Speech Resources for the Mixtec Language
2013-01-01
The Mixtec language is one of the main native languages in Mexico. In general, due to urbanization, discrimination, and limited attempts to promote the culture, the native languages are disappearing. Most of the information available about the Mixtec language is in written form, as in dictionaries, which, although including examples of how to pronounce the Mixtec words, are not as reliable as listening to the correct pronunciation from a native speaker. Formal acoustic resources, such as speech corpora, are almost non-existent for Mixtec, and no speech technologies are known to have been developed for it. This paper presents the development of the following resources for the Mixtec language: (1) a speech database of traditional narratives of the Mixtec culture spoken by a native speaker (labelled at the phonetic and orthographic levels by means of spectral analysis) and (2) a native speaker-adaptive automatic speech recognition (ASR) system (trained with the speech database) integrated with a Mixtec-to-Spanish/Spanish-to-Mixtec text translator. The speech database, although small and limited to a single variant, was reliable enough to build the multiuser speech application, which achieved a mean recognition/translation performance of up to 94.36% in experiments with non-native speakers (the target users). PMID:23710134
Madsen, Heidi Holst; Madsen, Dicte; Gauffriau, Marianne
2016-01-01
Unique identifiers (UID) are seen as an effective key to match identical publications across databases or identify duplicates in a database. The objective of the present study is to investigate how well UIDs work as match keys in the integration between Pure and SciVal, based on a case with publications from the health sciences. We evaluate the matching process based on information about coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match keys. We analyze this information to detect errors, if any, in the matching process. As an example we also briefly discuss how publication sets formed by using UIDs as the match keys may affect the bibliometric indicators number of publications, number of citations, and the average number of citations per publication. The objective is addressed in a literature review and a case study. The literature review shows that only a few studies evaluate how well UIDs work as a match key. From the literature we identify four error types: Duplicate digital object identifiers (DOI), incorrect DOIs in reference lists and databases, DOIs not registered by the database where a bibliometric analysis is performed, and erroneous optical or special character recognition. The case study explores the use of UIDs in the integration between the databases Pure and SciVal. Specifically journal publications in English are matched between the two databases. We find all error types except erroneous optical or special character recognition in our publication sets. In particular the duplicate DOIs constitute a problem for the calculation of bibliometric indicators as both keeping the duplicates to improve the reliability of citation counts and deleting them to improve the reliability of publication counts will distort the calculation of average number of citations per publication. 
The use of UIDs as a match key in citation linking is implemented in many settings, and the availability of UIDs may become critical for the inclusion of a publication or a database in a bibliometric analysis.
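A minimal sketch of UID-based matching between two record sets, flagging duplicate DOIs of the kind the study identifies as a problem for indicator calculation. The record layout and field names are invented for the example:

```python
def match_on_doi(pure_records, scival_records):
    """Match records from two databases on DOI, flagging duplicate DOIs
    on either side, since duplicates distort publication counts and
    citation counts differently depending on how they are resolved."""
    def index(records):
        by_doi, dupes = {}, set()
        for rec in records:
            doi = rec.get("doi")
            if not doi:
                continue  # records without a UID cannot be matched this way
            key = doi.strip().lower()  # DOIs are case-insensitive
            if key in by_doi:
                dupes.add(key)
            else:
                by_doi[key] = rec
        return by_doi, dupes

    left, left_dupes = index(pure_records)
    right, right_dupes = index(scival_records)
    matched = sorted(set(left) & set(right))
    unmatched = sorted(set(left) ^ set(right))
    return {"matched": matched, "unmatched": unmatched,
            "duplicates": sorted(left_dupes | right_dupes)}
```

Normalizing case before comparison matters because DOIs are case-insensitive by specification; skipping it produces exactly the kind of spurious non-matches the review describes.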
T3SEdb: data warehousing of virulence effectors secreted by the bacterial Type III Secretion System.
Tay, Daniel Ming Ming; Govindarajan, Kunde Ramamoorthy; Khan, Asif M; Ong, Terenze Yao Rui; Samad, Hanif M; Soh, Wei Wei; Tong, Minyan; Zhang, Fan; Tan, Tin Wee
2010-10-15
Effectors of Type III Secretion System (T3SS) play a pivotal role in establishing and maintaining pathogenicity in the host and therefore the identification of these effectors is important in understanding virulence. However, the effectors display high level of sequence diversity, therefore making the identification a difficult process. There is a need to collate and annotate existing effector sequences in public databases to enable systematic analyses of these sequences for development of models for screening and selection of putative novel effectors from bacterial genomes that can be validated by a smaller number of key experiments. Herein, we present T3SEdb http://effectors.bic.nus.edu.sg/T3SEdb, a specialized database of annotated T3SS effector (T3SE) sequences containing 1089 records from 46 bacterial species compiled from the literature and public protein databases. Procedures have been defined for i) comprehensive annotation of experimental status of effectors, ii) submission and curation review of records by users of the database, and iii) the regular update of T3SEdb existing and new records. Keyword fielded and sequence searches (BLAST, regular expression) are supported for both experimentally verified and hypothetical T3SEs. More than 171 clusters of T3SEs were detected based on sequence identity comparisons (intra-cluster difference up to ~60%). Owing to this high level of sequence diversity of T3SEs, the T3SEdb provides a large number of experimentally known effector sequences with wide species representation for creation of effector predictors. We created a reliable effector prediction tool, integrated into the database, to demonstrate the application of the database for such endeavours. T3SEdb is the first specialised database reported for T3SS effectors, enriched with manual annotations that facilitated systematic construction of a reliable prediction model for identification of novel effectors. 
The T3SEdb represents a platform for inclusion of additional annotations of metadata for future developments of sophisticated effector prediction models for screening and selection of putative novel effectors from bacterial genomes/proteomes that can be validated by a small number of key experiments.
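The grouping of effectors by sequence identity can be illustrated with a greedy single-linkage sketch; the identity metric here is a naive position-wise comparison and the threshold is arbitrary, whereas the database's clusters rest on proper alignment-based identity:

```python
def identity(a, b):
    """Fraction of identical residues over the longer sequence (toy
    metric; real identity comparisons use sequence alignment)."""
    if not a or not b:
        return 0.0
    same = sum(1 for x, y in zip(a, b) if x == y)
    return same / max(len(a), len(b))

def cluster(seqs, threshold=0.4):
    """Greedy clustering: a sequence joins the first cluster whose
    representative (first member) it matches above the threshold,
    otherwise it founds a new cluster."""
    clusters = []
    for s in seqs:
        for c in clusters:
            if identity(s, c[0]) >= threshold:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters
```

With highly diverse effector families (intra-cluster differences up to ~60%, as the abstract notes), the threshold choice dominates how many clusters emerge, which is why the database pairs clustering with manual annotation.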
The Application and Future of Big Database Studies in Cardiology: A Single-Center Experience.
Lee, Kuang-Tso; Hour, Ai-Ling; Shia, Ben-Chang; Chu, Pao-Hsien
2017-11-01
As medical research techniques and quality have improved, it has become apparent that cardiovascular problems could be better resolved by stricter experimental design. In fact, substantial time and resources must be expended to fulfill the requirements of high-quality studies. Many worthy ideas and hypotheses could not be verified or proven owing to ethical or economic limitations. In recent years, new and varied applications of databases have received increasing attention. Important information regarding issues such as rare cardiovascular diseases, women's heart health, post-marketing analysis of different medications, or a combination of clinical and regional cardiac features can be obtained by the use of rigorous statistical methods. However, limitations exist in all databases. One of the keys to creating and correctly addressing this research is a reliable process for analyzing and interpreting these cardiologic databases.
BioQ: tracing experimental origins in public genomic databases using a novel data provenance model.
Saccone, Scott F; Quan, Jiaxi; Jones, Peter L
2012-04-15
Public genomic databases, which are often used to guide genetic studies of human disease, are now being applied to genomic medicine through in silico integrative genomics. These databases, however, often lack tools for systematically determining the experimental origins of the data. We introduce a new data provenance model that we have implemented in a public web application, BioQ, for assessing the reliability of the data by systematically tracing its experimental origins to the original subjects and biologics. BioQ allows investigators to both visualize data provenance as well as explore individual elements of experimental process flow using precise tools for detailed data exploration and documentation. It includes a number of human genetic variation databases such as the HapMap and 1000 Genomes projects. BioQ is freely available to the public at http://bioq.saclab.net.
Concepts and data model for a co-operative neurovascular database.
Mansmann, U; Taylor, W; Porter, P; Bernarding, J; Jäger, H R; Lasjaunias, P; Terbrugge, K; Meisel, J
2001-08-01
The clinical management of neurovascular diseases poses very complex problems, owing to the chronic character of the diseases and a long history of symptoms and diverse treatments. If patients are to benefit from treatment, treatment decisions must rely on reliable and accurate knowledge of the natural history of the disease and of the various treatments. Recent developments in statistical methodology and experience with electronic patient records are used to establish an information infrastructure based on a centralized register. A protocol for collecting data on neurovascular diseases is described, together with the technical and logistical aspects of implementing a database for these diseases. The database is designed as a co-operative tool for audit and research available to co-operating centres. When a database is linked to systematic patient follow-up, it can be used to study prognosis. Careful analysis of patient outcome is valuable for decision-making.
A Computational Chemistry Database for Semiconductor Processing
NASA Technical Reports Server (NTRS)
Jaffe, R.; Meyyappan, M.; Arnold, J. O. (Technical Monitor)
1998-01-01
The concept of a 'virtual reactor' or 'virtual prototyping' has received much attention recently in the semiconductor industry. Commercial codes to simulate thermal CVD and plasma processes have become available to aid equipment and process design efforts. The virtual prototyping effort would go nowhere if these codes did not come with a reliable database of the chemical and physical properties of the gases involved in semiconductor processing. Commercial code vendors have no capability to generate such a database and instead leave it to the user to find whatever is needed. While individual investigations of interesting chemical systems continue at universities, there has not been any large-scale effort to create a database. In this presentation, we outline our efforts in this area, which focus on the following five areas: 1. thermal CVD reaction mechanisms and rate constants; 2. thermochemical properties; 3. transport properties; 4. electron-molecule collision cross sections; and 5. gas-surface interactions.
NASA Technical Reports Server (NTRS)
Thomas, J. M.; Hanagud, S.
1974-01-01
The design criteria and test options for aerospace structural reliability were investigated. A decision methodology was developed for selecting a combination of structural tests and structural design factors. The decision method involves the use of Bayesian statistics and statistical decision theory. Procedures are discussed for obtaining and updating data-based probabilistic strength distributions for aerospace structures when test information is available and for obtaining subjective distributions when data are not available. The techniques used in developing the distributions are explained.
NASA Aerospace Flight Battery Systems Program Update
NASA Technical Reports Server (NTRS)
Manzo, Michelle; ODonnell, Patricia
1997-01-01
The objectives of NASA's Aerospace Flight Battery Systems Program are to: develop, maintain, and provide tools for the validation and assessment of aerospace battery technologies; accelerate the readiness of technology advances and provide infusion paths for emerging technologies; provide NASA projects with the required database and validation guidelines for technology selection of hardware and processes relating to aerospace batteries; disseminate validation and assessment tools, quality assurance, reliability, and availability information to the NASA and aerospace battery communities; and ensure that safe, reliable batteries are available for NASA's future missions.
NASA Astrophysics Data System (ADS)
Yin, Lucy; Andrews, Jennifer; Heaton, Thomas
2018-05-01
Earthquake parameter estimation using nearest neighbor searches over a large database of observations can yield reliable predictions. However, in the real-time application of Earthquake Early Warning (EEW) systems, accurate prediction using a large database is penalized by a significant delay in processing time. We propose using a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and reduce the processing time of nearest neighbor searches for prediction. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test that predicts peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Applying the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward, and the results will reduce the overall time of warning delivery for EEW.
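The speedup described above rests on a standard KD tree nearest-neighbor search. The following is a minimal pure-Python sketch of that data structure; the 2-D random points and query are illustrative toy data, not the Gutenberg Algorithm's filter-bank feature sets:

```python
import math
import random

def dist(a, b):
    # Euclidean distance between two 2-D points.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_kdtree(points, depth=0):
    # Recursively split on alternating coordinate axes.
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, target, best=None):
    # Depth-first search, pruning branches farther than the current best.
    if node is None:
        return best
    point, axis = node["point"], node["axis"]
    if best is None or dist(point, target) < dist(best, target):
        best = point
    diff = target[axis] - point[axis]
    close, away = ((node["left"], node["right"]) if diff <= 0
                   else (node["right"], node["left"]))
    best = nearest(close, target, best)
    if abs(diff) < dist(best, target):  # far side may still hold a closer point
        best = nearest(away, target, best)
    return best

random.seed(0)
db = [(random.random(), random.random()) for _ in range(1000)]
tree = build_kdtree(db)
query = (0.5, 0.5)
# The tree search agrees with an exhaustive scan, but visits far fewer points.
assert nearest(tree, query) == min(db, key=lambda p: dist(p, query))
```

The pruning test (`abs(diff) < dist(best, target)`) is what lets the tree skip most of the database, which is the source of the time savings the abstract reports.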
E-MSD: an integrated data resource for bioinformatics.
Velankar, S; McNeil, P; Mittard-Runte, V; Suarez, A; Barrell, D; Apweiler, R; Henrick, K
2005-01-01
The Macromolecular Structure Database (MSD) group (http://www.ebi.ac.uk/msd/) continues to enhance the quality and consistency of macromolecular structure data in the worldwide Protein Data Bank (wwPDB) and to work towards the integration of various bioinformatics data resources. One of the major obstacles to the improved integration of structural databases such as MSD and sequence databases like UniProt is the absence of an up-to-date and well-maintained mapping between corresponding entries. We have worked closely with the UniProt group at the EBI to clean up the taxonomy and sequence cross-reference information in the MSD and UniProt databases. This information is vital for the reliable integration of sequence family databases such as Pfam and Interpro with the structure-oriented databases SCOP and CATH. This information has been made available to the eFamily group (http://www.efamily.org.uk/) and now forms the basis of the regular interchange of information between the member databases (MSD, UniProt, Pfam, Interpro, SCOP and CATH). This exchange of annotation information has enriched the structural information in the MSD database with annotation from wider sequence-oriented resources. This work was carried out under the 'Structure Integration with Function, Taxonomy and Sequences (SIFTS)' initiative (http://www.ebi.ac.uk/msd-srv/docs/sifts) in the MSD group.
Re-thinking organisms: The impact of databases on model organism biology.
Leonelli, Sabina; Ankeny, Rachel A
2012-03-01
Community databases have become crucial to the collection, ordering and retrieval of data gathered on model organisms, as well as to the ways in which these data are interpreted and used across a range of research contexts. This paper analyses the impact of community databases on research practices in model organism biology by focusing on the history and current use of four community databases: FlyBase, Mouse Genome Informatics, WormBase and The Arabidopsis Information Resource. We discuss the standards used by the curators of these databases for what counts as reliable evidence, acceptable terminology, appropriate experimental set-ups and adequate materials (e.g., specimens). On the one hand, these choices are informed by the collaborative research ethos characterising most model organism communities. On the other hand, the deployment of these standards in databases reinforces this ethos and gives it concrete and precise instantiations by shaping the skills, practices, values and background knowledge required of the database users. We conclude that the increasing reliance on community databases as vehicles to circulate data is having a major impact on how researchers conduct and communicate their research, which affects how they understand the biology of model organisms and its relation to the biology of other species. Copyright © 2011 Elsevier Ltd. All rights reserved.
77 FR 12207 - Pyroxasulfone; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-29
... and post- natal toxicity database for pyroxasulfone includes developmental toxicity studies in rats... study and developmental toxicity study in rabbits following in utero or post-natal exposure to... reliability as well as the relationship of the results of the studies to human risk. EPA has also considered...
Problem Solving with Guided Repeated Oral Reading Instruction
ERIC Educational Resources Information Center
Conderman, Greg; Strobel, Debra
2006-01-01
Many students with disabilities require specialized instructional interventions and frequent progress monitoring in reading. The guided repeated oral reading technique promotes oral reading fluency while providing a reliable data-based monitoring system. This article emphasizes the importance of problem-solving when using this reading approach.
A Vision System For A Mars Rover
NASA Astrophysics Data System (ADS)
Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.
1987-01-01
A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.
A vision system for a Mars rover
NASA Technical Reports Server (NTRS)
Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.
1988-01-01
A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Templin-Branner, Wilma
The purpose of this training is to familiarize participants with reliable online environmental health and toxicology information from the National Library of Medicine and other reliable sources. Skills and knowledge acquired in this training class will enable participants to access, utilize, and refer others to environmental health and toxicology information. After completing this course, participants will be able to: (1) identify quality, accurate, and authoritative online resources pertaining to environmental health, toxicology, and related medical information; (2) demonstrate the ability to perform strategic search techniques to find relevant online information; and (3) apply the skills and knowledge obtained in this class to their organization's health information needs. NLM's TOXNET (Toxicology Data Network) is a free, Web-based system of databases on toxicology, environmental health, hazardous chemicals, toxic releases, chemical nomenclatures, and specialty areas such as occupational health and consumer products. Types of information in the TOXNET databases include: (1) specific chemicals, mixtures, and products; (2) unknown chemicals; and (3) special toxic effects of chemicals in humans and/or animals.
Operations and Modeling Analysis
NASA Technical Reports Server (NTRS)
Ebeling, Charles
2005-01-01
The Reliability and Maintainability Analysis Tool (RMAT) provides NASA the capability to estimate reliability and maintainability (R&M) parameters and operational support requirements for proposed space vehicles, based upon relationships established from both aircraft and Shuttle R&M data. RMAT has matured both in its underlying database and in its level of sophistication in extrapolating this historical data to satisfy proposed mission requirements, maintenance concepts and policies, and vehicle type (ranging from aircraft-like to Shuttle-like). However, a companion analysis tool, the Logistics Cost Model (LCM), has not reached the same level of maturity as RMAT due, in large part, to nonexistent or outdated cost estimating relationships and underlying cost databases, and its almost exclusive dependence on Shuttle operations and logistics cost input parameters. As a result, the full capability of the RMAT/LCM suite of analysis tools to take a conceptual vehicle and derive its operations and support requirements, along with the resulting operating and support costs, has not been realized.
Life Cycle Assessment for desalination: a review on methodology feasibility and reliability.
Zhou, Jin; Chang, Victor W-C; Fane, Anthony G
2014-09-15
As concerns grow over the natural resource depletion and environmental degradation caused by desalination, studies of the environmental sustainability of desalination are gaining importance. Life Cycle Assessment (LCA) is an ISO-standardized method widely applied to evaluate the environmental performance of desalination. This study reviews more than 30 desalination LCA studies since the 2000s and identifies two major issues in need of improvement. The first is feasibility, covering three elements that support the application of LCA to desalination: accounting methods, supporting databases, and life cycle impact assessment approaches. The second is reliability, addressing three essential aspects that drive uncertainty in results: the incompleteness of the system boundary, the unrepresentativeness of the database, and the omission of uncertainty analysis. This work can serve as a preliminary LCA reference for desalination specialists and will also strengthen LCA as an effective method to evaluate the environmental footprint of desalination alternatives. Copyright © 2014 Elsevier Ltd. All rights reserved.
Unsupervised Indoor Localization Based on Smartphone Sensors, iBeacon and Wi-Fi.
Chen, Jing; Zhang, Yi; Xue, Wei
2018-04-28
In this paper, we propose UILoc, an unsupervised indoor localization scheme that uses a combination of smartphone sensors, iBeacons, and Wi-Fi fingerprints for reliable and accurate indoor localization with zero labor cost. First, in contrast to conventional fingerprint-based methods, the UILoc system builds a fingerprint database automatically, without any site survey, and applies it in the fingerprint localization algorithm. Second, since the initial position is vital to the system, UILoc provides a basic location estimate through pedestrian dead reckoning (PDR). To provide accurate initial localization, this paper proposes an initial localization module: a weighted fusion algorithm combining a k-nearest neighbors (KNN) algorithm and a least squares algorithm. In UILoc, we have also designed a reliable model to reduce the landmark correction error. Experimental results show that UILoc can provide accurate positioning: the average localization error is about 1.1 m in the steady state, and the maximum error is 2.77 m.
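The fingerprint-matching step can be illustrated with a minimal weighted k-nearest-neighbors sketch. The received-signal-strength (RSS) vectors, reference positions, and inverse-distance weighting below are hypothetical illustrations of the general technique, not UILoc's actual fusion algorithm:

```python
import math

# Hypothetical fingerprint database: RSS vectors (one reading per beacon/AP)
# mapped to known (x, y) positions, of the kind UILoc collects automatically.
fingerprints = [
    ((-40, -60, -70), (0.0, 0.0)),
    ((-55, -45, -65), (3.0, 0.0)),
    ((-70, -50, -40), (3.0, 4.0)),
    ((-60, -65, -45), (0.0, 4.0)),
]

def wknn_locate(rss, k=3):
    # Rank stored fingerprints by Euclidean distance in signal space, then
    # average the top-k positions weighted by inverse signal distance.
    ranked = sorted(fingerprints, key=lambda f: math.dist(f[0], rss))[:k]
    weights = [1.0 / (math.dist(f[0], rss) + 1e-9) for f in ranked]
    total = sum(weights)
    x = sum(w * f[1][0] for w, f in zip(weights, ranked)) / total
    y = sum(w * f[1][1] for w, f in zip(weights, ranked)) / total
    return x, y

x, y = wknn_locate((-50, -50, -60))
print(round(x, 2), round(y, 2))  # estimate lies inside the reference area
```

A fused system would then blend this estimate with a PDR or least-squares position, e.g. by a weighted average of the two coordinates.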
Rabelo, Michelle; Nunes, Guilherme S; da Costa Amante, Natália Menezes; de Noronha, Marcos; Fachin-Martins, Emerson
2016-02-01
Muscle weakness is the main cause of motor impairment among stroke survivors and is associated with reduced peak muscle torque. The aim of this study was to systematically investigate and organize the evidence on the reliability of muscle strength evaluation measures in post-stroke survivors with chronic hemiparesis. Two assessors independently searched four electronic databases in January 2014 (Medline, Scielo, CINAHL, Embase). Inclusion criteria comprised studies on the reliability of muscle strength assessment in adult post-stroke patients with chronic hemiparesis. From the included studies we extracted reliability data, measured by the intraclass correlation coefficient (ICC) and/or similar indices. Meta-analyses were conducted only with isokinetic data. Of 450 articles, eight were included in this review. After quality analysis, two studies were considered of high quality. Five different joints were analyzed within the included studies (knee, hip, ankle, shoulder, and elbow), with reliability results varying from low to very high (ICCs from 0.48 to 0.99). Meta-analysis results for knee extension varied from high to very high reliability (pooled ICCs from 0.89 to 0.97), those for knee flexion from high to very high reliability (pooled ICCs from 0.84 to 0.91), and those for ankle plantar flexion showed high reliability (pooled ICC = 0.85). Objective muscle strength assessment can be used reliably for the lower and upper extremities in post-stroke patients with chronic hemiparesis.
The reliability paradox of the Parent-Child Conflict Tactics Corporal Punishment Subscale.
Lorber, Michael F; Slep, Amy M Smith
2018-02-01
In the present investigation we consider and explain an apparent paradox in the measurement of corporal punishment with the Parent-Child Conflict Tactics Scale (CTS-PC): How can it have poor internal consistency and still be reliable? The CTS-PC was administered to a community sample of 453 opposite sex couples who were parents of 3- to 7-year-old children. Internal consistency was marginal, yet item response theory analyses revealed that reliability rose sharply with increasing corporal punishment, exceeding .80 in the upper ranges of the construct. The results suggest that the CTS-PC Corporal Punishment subscale reliably discriminates among parents who report average to high corporal punishment (64% of mothers and 56% of fathers in the present sample), despite low overall internal consistency. These results have straightforward implications for the use and reporting of the scale. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Building an integrated neurodegenerative disease database at an academic health center.
Xie, Sharon X; Baek, Young; Grossman, Murray; Arnold, Steven E; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M-Y; Trojanowski, John Q
2011-07-01
It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration. These comparative studies rely on powerful database tools to quickly generate data sets matching the diverse and complementary criteria set by investigators. In this article, we present a novel integrated neurodegenerative disease (INDD) database, developed at the University of Pennsylvania (Penn) with the help of a consortium of Penn investigators. Because the work of these investigators is based on Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration, we were able to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used Microsoft SQL Server as the platform, with built-in "backwards" functionality providing Access as a frontend client to interface with the database. We used the PHP Hypertext Preprocessor to create the frontend web interface, and a master lookup table to integrate the individual neurodegenerative disease databases. We also present methods of data entry, database security, database backups, and database audit trails for the INDD database. Using the INDD database, we compared the results of a biomarker study with those obtained by the alternative approach of querying the individual databases separately. We have demonstrated that the Penn INDD database can query multiple database tables from a single console with high accuracy and reliability, providing a powerful tool for generating data sets in comparative studies of several neurodegenerative diseases. Copyright © 2011 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
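The master-lookup-table pattern described above can be sketched with a minimal in-memory SQLite example: a lookup table maps a global subject ID to per-disease record IDs, so one query spans several disease tables. All table and column names here are hypothetical illustrations, not the Penn INDD schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Hypothetical schema: one lookup table plus two per-disease tables.
cur.executescript("""
    CREATE TABLE master_lookup (global_id INTEGER PRIMARY KEY,
                                ad_id TEXT, pd_id TEXT);
    CREATE TABLE ad_cohort (ad_id TEXT, mmse INTEGER);
    CREATE TABLE pd_cohort (pd_id TEXT, updrs INTEGER);
    INSERT INTO master_lookup VALUES (1, 'AD-001', 'PD-042');
    INSERT INTO ad_cohort VALUES ('AD-001', 24);
    INSERT INTO pd_cohort VALUES ('PD-042', 31);
""")
# A single query retrieves one subject's records from both cohorts
# via the lookup table, instead of querying each database separately.
row = cur.execute("""
    SELECT m.global_id, a.mmse, p.updrs
    FROM master_lookup m
    JOIN ad_cohort a ON a.ad_id = m.ad_id
    JOIN pd_cohort p ON p.pd_id = m.pd_id
""").fetchone()
print(row)  # → (1, 24, 31)
```

The same join pattern extends to any number of disease tables, which is what lets a single console generate cross-disease data sets.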
Fast T Wave Detection Calibrated by Clinical Knowledge with Annotation of P and T Waves
Elgendi, Mohamed; Eskofier, Bjoern; Abbott, Derek
2015-01-01
Background: There are limited studies on the automatic detection of T waves in arrhythmic electrocardiogram (ECG) signals, perhaps because no arrhythmia dataset with annotated T waves is available. There is a growing need to develop numerically efficient algorithms that can accommodate the new trend of battery-driven ECG devices, and to analyze long-term recordings in a reliable and time-efficient manner, thereby improving the diagnostic ability of mobile devices and point-of-care technologies. Methods: Here, T wave annotation of the well-known MIT-BIH arrhythmia database is discussed and provided. Moreover, a simple, fast method for detecting T waves is introduced: a typical T wave detection method reduced to a basic approach consisting of two moving averages and dynamic thresholds. The dynamic thresholds were calibrated using four clinically known types of sinus node response to atrial premature depolarization (compensation, reset, interpolation, and reentry). Results: T wave peaks were determined and the proposed algorithm evaluated on two well-known databases, the QT and MIT-BIH Arrhythmia databases. The detector obtained a sensitivity of 97.14% and a positive predictivity of 99.29% over the first lead of the validation databases (221,186 beats in total). Conclusions: We present a simple yet very reliable T wave detection algorithm that can potentially be implemented on mobile battery-driven devices. In contrast to complex methods, it can be easily implemented in a digital filter design. PMID:26197321
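The two-moving-averages idea can be sketched as follows: a short average tracks candidate waves, a long average tracks the local baseline, and blocks where the short average exceeds the long one mark peaks. The window lengths, synthetic signal, and blocking logic below are illustrative assumptions, not the calibrated dynamic thresholds of the published detector:

```python
def moving_average(x, w):
    # Centered moving average (odd window w) with edge padding.
    half = w // 2
    padded = [x[0]] * half + list(x) + [x[-1]] * half
    return [sum(padded[i:i + w]) / w for i in range(len(x))]

def detect_peaks(signal, short_w=5, long_w=21):
    # Contiguous runs where the short average exceeds the long one mark
    # candidate waves; the largest sample in each run is taken as the peak.
    ma_peak = moving_average(signal, short_w)
    ma_event = moving_average(signal, long_w)
    peaks, start = [], None
    for i, (p, e) in enumerate(zip(ma_peak, ma_event)):
        if p > e and start is None:
            start = i
        elif p <= e and start is not None:
            peaks.append(max(range(start, i), key=lambda j: signal[j]))
            start = None
    return peaks

# Synthetic signal: two triangular bumps over a flat baseline.
sig = [0.0] * 100
for center in (30, 70):
    for j in range(-5, 6):
        sig[center + j] += 1.0 - abs(j) / 6.0

print(detect_peaks(sig))  # → [30, 70]
```

In a real detector the two averages are cheap digital filters, which is why the approach suits battery-driven devices.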
Assessing the number of fire fatalities in a defined population.
Jonsson, Anders; Bergqvist, Anders; Andersson, Ragnar
2015-12-01
Fire-related fatalities and injuries have become a growing governmental concern in Sweden, and a national vision zero strategy has been adopted stating that nobody should be killed or seriously injured by fire. There is considerable uncertainty, however, regarding the numbers of both deaths and injuries due to fires. Different national sources present different numbers, even for deaths, which obstructs reliable surveillance of the problem over time; we assume the situation is similar in other countries. This study seeks to assess the true number of fire-related deaths in Sweden by combining sources, and to verify the coverage of each individual source. By doing so, we also wish to demonstrate the possibilities of improved surveillance practices. Data from three national sources were collected and matched: a special database on fatal fires held by the Swedish Civil Contingencies Agency (nationally responsible for fire prevention), a database on forensic medical examinations held by the National Board of Forensic Medicine, and the cause of death register held by the Swedish National Board of Health and Welfare. The results disclose considerable underreporting in the individual sources. The national database on fatal fires, which serves as the principal source for policy making on fire prevention, underestimates the true situation by 20%; its coverage of residential fires appears to be better than that of other fires. Systematic safety work and informed policy-making presuppose access to correct and reliable numbers. By combining several different sources, as suggested in this study, the national database on fatal fires is now considerably improved and includes regular matching with complementary sources.
DOT National Transportation Integrated Search
2006-03-01
There have been several studies that have investigated interactions between light and heavy vehicles. These have primarily consisted of crash database analyses where Police Accident Reports have been studied. These approaches are generally reliable, ...
Prevalence and Effects of Child Exposure to Domestic Violence.
ERIC Educational Resources Information Center
Fantuzzo, John W.; Mohr, Wanda K.
1999-01-01
Discusses the limitations of current databases documenting the prevalence and effects of child exposure to domestic violence and describes a model for the collection of reliable and valid prevalence data, the Spousal Assault Replication Program, which uses data collected by the police and university researchers. (SLD)
CICS Region Virtualization for Cost Effective Application Development
ERIC Educational Resources Information Center
Khan, Kamal Waris
2012-01-01
Mainframe is used for hosting large commercial databases, transaction servers and applications that require a greater degree of reliability, scalability and security. Customer Information Control System (CICS) is a mainframe software framework for implementing transaction services. It is designed for rapid, high-volume online processing. In order…
Cygnus Performance in Subcritical Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. Corrow, M. Hansen, D. Henderson, S. Lutz, C. Mitton, et al.
2008-02-01
The Cygnus Dual Beam Radiographic Facility consists of two identical radiographic sources with the following specifications: 4-rad dose at 1 m, 1-mm spot size, 50-ns pulse length, and 2.25-MeV endpoint energy. The facility is located in an underground tunnel complex at the Nevada Test Site, where SubCritical Experiments (SCEs) are performed to study the dynamic properties of plutonium. The Cygnus sources were developed as a primary diagnostic for these tests. Since SCEs are single-shot, high-value events, reliability and reproducibility are key issues. Enhanced reliability involves minimizing failure modes through design, inspection, and testing; many unique hardware and operational features were incorporated into Cygnus to ensure reliability. Enhanced reproducibility involves normalizing shot-to-shot output, also through design, inspection, and testing. The first SCE to utilize Cygnus, Armando, was executed on May 25, 2004. A year later, in April and May 2005, calibrations using a plutonium step wedge were performed; the results from this series were used for more precise interpretation of the Armando data. From February to May 2007, Cygnus was fielded on Thermos, a series of small-sample plutonium shots using a one-dimensional geometry. Pulsed power research generally dictates frequent changes in hardware configuration, whereas SCE applications have typically required constant machine settings. Therefore, while operating during the past four years we have accumulated a large database for evaluating machine performance under highly consistent operating conditions. Through analysis of this database, Cygnus reliability and reproducibility on Armando, Step Wedge, and Thermos are presented.
Sana, Theodore R; Roark, Joseph C; Li, Xiangdong; Waddell, Keith; Fischer, Steven M
2008-09-01
In an effort to simplify and streamline compound identification from metabolomics data generated by liquid chromatography time-of-flight mass spectrometry, we have created software for constructing Personalized Metabolite Databases with content from over 15,000 compounds pulled from the public METLIN database (http://metlin.scripps.edu/). Moreover, we have added extra functionalities to the database that (a) permit the addition of user-defined retention times as an orthogonal searchable parameter to complement accurate mass data; and (b) allow interfacing to separate software, a Molecular Formula Generator (MFG), that facilitates reliable interpretation of any database matches from the accurate mass spectral data. To test the utility of this identification strategy, we added retention times to a subset of masses in this database, representing a mixture of 78 synthetic urine standards. The synthetic mixture was analyzed and screened against this METLIN urine database, resulting in 46 accurate mass and retention time matches. Human urine samples were subsequently analyzed under the same analytical conditions and screened against this database. A total of 1387 ions were detected in human urine; 16 of these ions matched both accurate mass and retention time parameters for the 78 urine standards in the database. Another 374 had only an accurate mass match to the database, with 163 of those masses also having the highest MFG score. Furthermore, MFG calculated a formula for a further 849 ions that had no match to the database. Taken together, these results suggest that the METLIN Personal Metabolite database and MFG software offer a robust strategy for confirming the formula of database matches. In the event of no database match, it also suggests possible formulas that may be helpful in interpreting the experimental results.
Field reliability of competency and sanity opinions: A systematic review and meta-analysis.
Guarnera, Lucy A; Murrie, Daniel C
2017-06-01
We know surprisingly little about the interrater reliability of forensic psychological opinions, even though courts and other authorities have long called for known error rates for scientific procedures admitted as courtroom testimony. This is particularly true for opinions produced during routine practice in the field, even for some of the most common types of forensic evaluations: evaluations of adjudicative competency and legal sanity. To address this gap, we used meta-analytic procedures and study space methodology to systematically review studies that examined the interrater reliability, particularly the field reliability, of competency and sanity opinions. Of 59 identified studies, 9 addressed the field reliability of competency opinions and 8 addressed the field reliability of sanity opinions. These studies presented a wide range of reliability estimates: pairwise percentage agreements ranged from 57% to 100%, and kappas ranged from .28 to 1.0. Meta-analytic combinations of reliability estimates obtained by independent evaluators returned estimates of κ = .49 (95% CI: .40-.58) for competency opinions and κ = .41 (95% CI: .29-.53) for sanity opinions. This wide range of reliability estimates underscores the extent to which different evaluation contexts tend to produce different reliability rates. Unfortunately, our study space analysis illustrates that available field reliability studies typically provide little information about the contextual variables crucial to understanding their findings. Given these concerns, we offer suggestions for improving research on the field reliability of competency and sanity opinions, as well as suggestions for improving the reliability rates themselves. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Generalized entropies and the similarity of texts
NASA Astrophysics Data System (ADS)
Altmann, Eduardo G.; Dias, Laércio; Gerlach, Martin
2017-01-01
We show how generalized Gibbs-Shannon entropies can provide new insights into the statistical properties of texts. The universal distribution of word frequencies (Zipf’s law) implies that the generalized entropies, computed at the word level, are dominated by words in a specific range of frequencies. Here we show that this is the case not only for the generalized entropies but also for the generalized (Jensen-Shannon) divergences used to compute the similarity between different texts. This finding allows us to identify the contribution of specific words (and word frequencies) to the different generalized entropies and also to estimate the size of the databases needed to obtain a reliable estimation of the divergences. We test our results on large databases of books (from the Google n-gram database) and scientific papers (indexed by Web of Science).
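As a minimal sketch of the quantities involved, the following computes a generalized entropy of order alpha (in the Havrda-Charvát/Tsallis form, which reduces to the Gibbs-Shannon entropy as alpha → 1) and the corresponding generalized Jensen-Shannon divergence between two word-frequency distributions; the exact functional form used by the authors may differ.

```python
import math

def gen_entropy(p, alpha):
    # Generalized (Havrda-Charvat/Tsallis) entropy of order alpha;
    # alpha -> 1 recovers the Gibbs-Shannon entropy -sum(p ln p).
    if abs(alpha - 1.0) < 1e-12:
        return -sum(x * math.log(x) for x in p if x > 0)
    return (1.0 - sum(x ** alpha for x in p)) / (alpha - 1.0)

def gen_jsd(p, q, alpha):
    # Generalized Jensen-Shannon divergence: entropy of the mixture
    # minus the mean entropy of the two distributions.
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return gen_entropy(m, alpha) - 0.5 * (gen_entropy(p, alpha)
                                          + gen_entropy(q, alpha))
```

Varying alpha changes which word-frequency range dominates the divergence: small alpha emphasizes rare words, large alpha emphasizes the most frequent ones.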
Associative memory model for searching an image database by image snippet
NASA Astrophysics Data System (ADS)
Khan, Javed I.; Yun, David Y.
1994-09-01
This paper presents an associative memory called multidimensional holographic associative computing (MHAC), which can potentially be used to perform feature-based image database queries using an image snippet. MHAC has the unique capability to selectively focus on specific segments of a query frame during associative retrieval. As a result, this model can perform searches on the basis of featural significance described by a subset of the snippet pixels. This capability is critical for visual query in image databases because quite often the cognitive index features in the snippet are statistically weak. Unlike conventional artificial associative memories, MHAC uses a two-level representation and incorporates additional meta-knowledge about the reliability status of the segments of information it receives and forwards. In this paper we present an analysis of the focus characteristics of MHAC.
Cuchna, Jennifer W; Hoch, Matthew C; Hoch, Johanna M
2016-05-01
To synthesize the literature and perform a meta-analysis of both the interrater and intrarater reliability of the FMS™. Academic Search Complete, CINAHL, Medline and SportsDiscus databases were systematically searched from inception to March 2015. Studies were included if the primary purpose was to determine the interrater or intrarater reliability of the FMS™, assessed and scored all 7 items using the standard scoring criteria, provided a composite score and employed intraclass correlation coefficients (ICCs). Studies were excluded if reliability was not the primary aim, participants were injured at data collection, or a modified FMS™ or scoring system was utilized. Seven papers were included; 6 assessing interrater and 6 assessing intrarater reliability. There was moderate evidence of good interrater reliability with a summary ICC of 0.843 (95% CI = 0.640, 0.936; Q7 = 84.915, p < 0.0001). There was moderate evidence of good intrarater reliability with a summary ICC of 0.869 (95% CI = 0.785, 0.921; Q12 = 60.763, p < 0.0001). There was moderate evidence for both forms of reliability. The sensitivity assessments revealed this interpretation is stable and not influenced by any one study. Overall, the FMS™ is a reliable tool for clinical practice. Copyright © 2015 Elsevier Ltd. All rights reserved.
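The pooling of per-study reliability coefficients into a summary estimate can be illustrated with an inverse-variance combination after Fisher z-transformation; this is a fixed-effect simplification of the meta-analytic model used in such reviews, and the sample sizes below are hypothetical.

```python
import math

def pool_fisher_z(estimates, ns):
    """Inverse-variance (fixed-effect) pooling of ICC-like coefficients
    after Fisher z-transformation. Returns the pooled estimate and an
    approximate 95% CI, both back-transformed to the correlation scale."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in estimates]
    ws = [n - 3 for n in ns]          # approximate inverse variances
    z = sum(w * zi for w, zi in zip(ws, zs)) / sum(ws)
    se = 1 / math.sqrt(sum(ws))
    back = lambda x: (math.exp(2 * x) - 1) / (math.exp(2 * x) + 1)
    return back(z), (back(z - 1.96 * se), back(z + 1.96 * se))
```

A random-effects model, as typically used when heterogeneity statistics such as Q are significant, would additionally widen the weights by a between-study variance term.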
NCBI2RDF: enabling full RDF-based access to NCBI databases.
Anguita, Alberto; García-Remesal, Miguel; de la Iglesia, Diana; Maojo, Victor
2013-01-01
RDF has become the standard technology for enabling interoperability among heterogeneous biomedical databases. The NCBI provides access to a large set of life sciences databases through a common interface called Entrez. However, the latter does not provide RDF-based access to such databases, and, therefore, they cannot be integrated with other RDF-compliant databases and accessed via SPARQL query interfaces. This paper presents the NCBI2RDF system, aimed at providing RDF-based access to the complete NCBI data repository. This API creates a virtual endpoint for servicing SPARQL queries over different NCBI repositories and presenting the query results to users in SPARQL results format, thus enabling this data to be integrated and/or stored with other RDF-compliant repositories. SPARQL queries are dynamically resolved, decomposed, and forwarded to the NCBI-provided E-utilities programmatic interface to access the NCBI data. Furthermore, we show how our approach increases the expressiveness of the native NCBI querying system, allowing several databases to be accessed simultaneously. This feature significantly boosts productivity when working with complex queries and saves time and effort for biomedical researchers. Our approach has been validated with a large number of SPARQL queries, thus proving its reliability and enhanced capabilities in biomedical environments.
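The core idea of resolving a SPARQL basic graph pattern can be illustrated with a toy matcher over in-memory triples; NCBI2RDF instead decomposes such patterns and forwards them to the E-utilities interface, and the identifiers below are hypothetical.

```python
def match_pattern(triples, pattern):
    """Minimal basic-graph-pattern matcher: pattern terms starting with
    '?' are variables; other terms must match the triple exactly.
    Returns one variable-binding dict per matching triple."""
    results = []
    for triple in triples:
        binding, ok = {}, True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

# Hypothetical triples standing in for data a virtual endpoint would
# fetch on demand from a non-RDF backend.
triples = [("pubmed:1", "dc:title", "A"), ("pubmed:2", "dc:title", "B")]
```

A virtual endpoint generalizes this by translating each pattern into backend API calls at query time rather than scanning locally stored triples.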
Shuttle-Data-Tape XML Translator
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Osborne, Richard N.
2005-01-01
JSDTImport is a computer program for translating native Shuttle Data Tape (SDT) files from American Standard Code for Information Interchange (ASCII) format into databases in other formats. JSDTImport solves the problem of organizing the SDT content, affording flexibility to enable users to choose how to store the information in a database to better support client and server applications. JSDTImport can be dynamically configured by use of a simple Extensible Markup Language (XML) file. JSDTImport uses this XML file to define how each record and field will be parsed, its layout and definition, and how the resulting database will be structured. JSDTImport also includes a client application programming interface (API) layer that provides abstraction for the data-querying process. The API enables a user to specify the search criteria to apply in gathering all the data relevant to a query. The API can be used to organize the SDT content and translate into a native XML database. The XML format is structured into efficient sections, enabling excellent query performance by use of the XPath query language. Optionally, the content can be translated into a Structured Query Language (SQL) database for fast, reliable SQL queries on standard database server computers.
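A minimal illustration of the XPath-style querying described above, using Python's standard-library ElementTree; the element names and values mimic, but are not, the actual JSDTImport schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment mimicking an XML database built from SDT
# records; the structure is illustrative only.
doc = ET.fromstring(
    "<sdt>"
    "<record id='1'><field name='MSID'>V90X1234</field></record>"
    "<record id='2'><field name='MSID'>V90X5678</field></record>"
    "</sdt>")

# XPath-style query: the values of every field named MSID,
# using the limited XPath subset ElementTree supports.
values = [f.text for f in doc.findall(".//field[@name='MSID']")]
```

Sectioning the XML by record and field, as the translator does, keeps such queries confined to small subtrees and hence fast.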
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buche, D. L.
This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects. This report is the second in a series of reports detailing this effort.
Allometric scaling theory applied to FIA biomass estimation
David C. Chojnacky
2002-01-01
Tree biomass estimates in the Forest Inventory and Analysis (FIA) database are derived from numerous methodologies whose abundance and complexity raise questions about consistent results throughout the U.S. A new model based on allometric scaling theory ("WBE") offers simplified methodology and a theoretically sound basis for improving the reliability and...
Measures of Searcher Performance: A Psychometric Evaluation.
ERIC Educational Resources Information Center
Wildemuth, Barbara M.; And Others
1993-01-01
Describes a study of medical students that was conducted to evaluate measures of performance on factual searches of INQUIRER, a full-text database in microbiology. Measures relating to recall, precision, search term overlap, and efficiency are discussed; reliability and construct validity are considered; and implications for future research are…
Measures of Self-Care Independence for Children with Osteochondrodysplasia: A Clinimetric Review
ERIC Educational Resources Information Center
Ireland, Penelope; Johnston, Leanne M.
2012-01-01
This systematic review evaluates the validity, reliability, and clinical utility of outcome measures used to assess self-care skills among children with congenital musculoskeletal conditions and assesses the applicability of these measures for children with osteochondrodysplasia aged 0-12 years. Electronic databases were searched to identify…
Sharper Standards for Fund-Raising Integrity
ERIC Educational Resources Information Center
Peterson, Vance T.
2004-01-01
Recently, the Council for Advancement and Support of Education (CASE) revised its standards for annual gifts and campaigns. The purpose was to erase ambiguities, establish tighter definitions for certain types of gifts, and ensure greater clarity about best management practices. In addition, the effort created a reliable database of campaign…
Self-Report Measures of Juvenile Psychopathic Personality Traits: A Comparative Review
ERIC Educational Resources Information Center
Vaughn, Michael G.; Howard, Matthew O.
2005-01-01
The authors evaluated self-report instruments currently being used to assess children and adolescents with psychopathic personality traits with respect to their reliability, validity, and research utility. Comprehensive searches across multiple computerized bibliographic databases were conducted and supplemented with manual searches. A total of 30…
Content-based video indexing and searching with wavelet transformation
NASA Astrophysics Data System (ADS)
Stumpf, Florian; Al-Jawad, Naseer; Du, Hongbo; Jassim, Sabah
2006-05-01
Biometric databases form an essential tool in the fight against international terrorism, organised crime and fraud. Various government and law enforcement agencies have their own biometric databases consisting of combinations of fingerprints, iris codes, face images/videos and speech records for an increasing number of persons. In many cases personal data linked to biometric records are incomplete and/or inaccurate. Besides, biometric data in different databases for the same individual may be recorded with different personal details. Following the recent terrorist atrocities, law enforcement agencies collaborate more than before and have greater reliance on database sharing. In such an environment, reliable biometric-based identification must not only determine who you are but also who else you are. In this paper we propose a compact content-based video signature and indexing scheme that can facilitate retrieval of multiple records in face biometric databases that belong to the same person even if their associated personal data are inconsistent. We shall assess the performance of our system using a benchmark audiovisual face biometric database that has multiple videos for each subject but with different identity claims. We shall demonstrate that retrieval of a relatively small number of videos that are nearest, in terms of the proposed index, to any video in the database results in a significant proportion of that individual's biometric data.
Temperature Dependence of Mineral Solubility in Water. Part 3. Alkaline and Alkaline Earth Sulfates
NASA Astrophysics Data System (ADS)
Krumgalz, B. S.
2018-06-01
Databases of alkaline and alkaline earth sulfate solubilities in water at various temperatures were created using experimental data from publications spanning roughly the last two centuries. A statistical critical evaluation of the created databases was performed, since there were enough independent data sources to justify such an evaluation. The reliable experimental data were adequately described by polynomial expressions over various temperature ranges. Using the Pitzer approach for ionic activity and osmotic coefficients, the thermodynamic solubility products for the discussed minerals have been calculated at various temperatures and represented by polynomial expressions.
Temperature Dependence of Mineral Solubility in Water. Part 2. Alkaline and Alkaline Earth Bromides
NASA Astrophysics Data System (ADS)
Krumgalz, B. S.
2018-03-01
Databases of alkaline and alkaline earth bromide solubilities in water at various temperatures were created using experimental data from publications spanning roughly the last two centuries. A statistical critical evaluation of the created databases was performed, since there were enough independent data sources to justify such an evaluation. The reliable experimental data were adequately described by polynomial expressions over various temperature ranges. Using the Pitzer approach for ionic activity and osmotic coefficients, the thermodynamic solubility products for the discussed bromide minerals have been calculated at various temperature intervals and also represented by polynomial expressions.
Strategy for a transparent, accessible, and sustainable national claims database.
Gelburd, Robin
2015-03-01
The article outlines the strategy employed by FAIR Health, Inc, an independent nonprofit, to maintain a national database of over 18 billion private health insurance claims to support consumer education, payer and provider operations, policy makers, and researchers with standard and customized data sets on an economically self-sufficient basis. It explains how FAIR Health conducts all operations in-house, including data collection, security, validation, information organization, product creation, and transmission, with a commitment to objectivity and reliability in data and data products. It also describes the data elements available to researchers and the diverse studies that FAIR Health data facilitate.
Characterizing the genetic structure of a forensic DNA database using a latent variable approach.
Kruijver, Maarten
2016-07-01
Several problems in forensic genetics require a representative model of a forensic DNA database. Obtaining an accurate representation of the offender database can be difficult, since databases typically contain groups of persons with unregistered ethnic origins in unknown proportions. We propose to estimate the allele frequencies of the subpopulations comprising the offender database and their proportions from the database itself using a latent variable approach. We present a model for which parameters can be estimated using the expectation maximization (EM) algorithm. This approach does not rely on relatively small and possibly unrepresentative population surveys, but is driven by the actual genetic composition of the database only. We fit the model to a snapshot of the Dutch offender database (2014), which contains close to 180,000 profiles, and find that three subpopulations suffice to describe a large fraction of the heterogeneity in the database. We demonstrate the utility and reliability of the approach with three applications. First, we use the model to predict the number of false leads obtained in database searches. We assess how well the model predicts the number of false leads obtained in mock searches in the Dutch offender database, both for the case of familial searching for first degree relatives of a donor and searching for contributors to three-person mixtures. Second, we study the degree of partial matching between all pairs of profiles in the Dutch database and compare this to what is predicted using the latent variable approach. Third, we use the model to provide evidence to support that the Dutch practice of estimating match probabilities using the Balding-Nichols formula with a native Dutch reference database and θ=0.03 is conservative. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
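The latent variable idea can be sketched with an EM fit of a k-component mixture of independent Bernoulli loci; real STR profiles are multiallelic and the published model is more elaborate, so this is only a structural illustration with made-up binary data.

```python
def em_mixture(profiles, k=2, iters=50):
    """EM for a k-component mixture of independent Bernoulli loci.
    profiles: list of equal-length 0/1 tuples. Returns the estimated
    mixing proportions and per-component 'allele' frequencies."""
    L = len(profiles[0])
    # Asymmetric initialization to break symmetry between components.
    freqs = [[(j + 1) / (k + 1)] * L for j in range(k)]
    props = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each profile.
        resp = []
        for x in profiles:
            lik = []
            for j in range(k):
                p = props[j]
                for l, a in enumerate(x):
                    f = freqs[j][l]
                    p *= f if a else (1 - f)
                lik.append(p)
            tot = sum(lik)
            resp.append([p / tot for p in lik])
        # M-step: update proportions and frequencies from responsibilities.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            props[j] = nj / len(profiles)
            freqs[j] = [sum(r[j] * x[l] for r, x in zip(resp, profiles)) / nj
                        for l in range(L)]
    return props, freqs
```

As in the paper's approach, the fit is driven only by the genetic composition of the database itself, with no external population survey.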
iRefWeb: interactive analysis of consolidated protein interaction data and their supporting evidence
Turner, Brian; Razick, Sabry; Turinsky, Andrei L.; Vlasblom, James; Crowdy, Edgard K.; Cho, Emerson; Morrison, Kyle; Wodak, Shoshana J.
2010-01-01
We present iRefWeb, a web interface to protein interaction data consolidated from 10 public databases: BIND, BioGRID, CORUM, DIP, IntAct, HPRD, MINT, MPact, MPPI and OPHID. iRefWeb enables users to examine aggregated interactions for a protein of interest, and presents various statistical summaries of the data across databases, such as the number of organism-specific interactions, proteins and cited publications. Through links to source databases and supporting evidence, researchers may gauge the reliability of an interaction using simple criteria, such as the detection methods, the scale of the study (high- or low-throughput) or the number of cited publications. Furthermore, iRefWeb compares the information extracted from the same publication by different databases, and offers means to follow up possible inconsistencies. We provide an overview of the consolidated protein–protein interaction landscape and show how it can be automatically cropped to aid the generation of meaningful organism-specific interactomes. iRefWeb can be accessed at: http://wodaklab.org/iRefWeb. Database URL: http://wodaklab.org/iRefWeb/ PMID:20940177
BioModels Database: a repository of mathematical models of biological processes.
Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas
2013-01-01
BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to ensure they correspond to the original publication, and are manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are accepted in SBML and CellML formats, stored in SBML format, and are available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are made available in BioModels Database at regular releases, about every 4 months.
Experience with ATLAS MySQL PanDA database service
NASA Astrophysics Data System (ADS)
Smirnov, Y.; Wlodek, T.; De, K.; Hover, J.; Ozturk, N.; Smith, J.; Wenaus, T.; Yu, D.
2010-04-01
The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.
DB Dehydrogenase: an online integrated structural database on enzyme dehydrogenase.
Nandy, Suman Kumar; Bhuyan, Rajabrata; Seal, Alpana
2012-01-01
Dehydrogenase enzymes are almost indispensable for metabolic processes. Shortage or malfunctioning of dehydrogenases often leads to several acute diseases such as cancers, retinal diseases, diabetes mellitus, Alzheimer's disease, and hepatitis B and C. With the advancement of modern-day research, huge amounts of sequence, structural and functional data are generated every day, widening the gap between structural attributes and their functional understanding. DB Dehydrogenase is an effort to relate the functionalities of dehydrogenases to their structures. It is a completely web-based structural database, covering almost all dehydrogenases [~150 enzyme classes, ~1200 entries from ~160 organisms] whose structures are known. It was created by extracting and integrating various online resources to provide reliable data, and is implemented as a MySQL relational database with user-friendly web interfaces using CGI Perl. Flexible search options are provided for data extraction and exploration. To summarize, with the sequence, structure and function of all dehydrogenases in one place, along with the necessary cross-referencing, this database will be useful for researchers carrying out further work in this field. The database is available for free at http://www.bifku.in/DBD/
Building a Database for a Quantitative Model
NASA Technical Reports Server (NTRS)
Kahn, C. Joseph; Kleinhammer, Roger
2014-01-01
A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to the data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data is used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
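The metadata-field linkage described above can be sketched as a simple keyed lookup from a Basic Event identifier to its data source and adjustment factors; the keys, sources, and rates below are purely illustrative.

```python
# Hypothetical spreadsheet-style database: each Basic Event's unique
# metadata key resolves to its data source and stress factor, so the
# failure estimate used in the model is always traceable.
database = {
    "BE-VALVE-001": {"source": "handbook table 4",
                     "base_rate": 1.2e-6, "stress_factor": 2.0},
    "BE-PUMP-007":  {"source": "flight history",
                     "base_rate": 5.0e-7, "stress_factor": 1.0},
}

def failure_rate(key):
    """Failure rate fed to the model: base estimate from the recorded
    data source, adjusted by the stored stress factor."""
    rec = database[key]
    return rec["base_rate"] * rec["stress_factor"]
```

With this arrangement, auditing a Basic Event is a single lookup from the model's metadata field back to the source and every manipulation applied to it.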
E-MSD: an integrated data resource for bioinformatics
Velankar, S.; McNeil, P.; Mittard-Runte, V.; Suarez, A.; Barrell, D.; Apweiler, R.; Henrick, K.
2005-01-01
The Macromolecular Structure Database (MSD) group (http://www.ebi.ac.uk/msd/) continues to enhance the quality and consistency of macromolecular structure data in the worldwide Protein Data Bank (wwPDB) and to work towards the integration of various bioinformatics data resources. One of the major obstacles to the improved integration of structural databases such as MSD and sequence databases like UniProt is the absence of up to date and well-maintained mapping between corresponding entries. We have worked closely with the UniProt group at the EBI to clean up the taxonomy and sequence cross-reference information in the MSD and UniProt databases. This information is vital for the reliable integration of the sequence family databases such as Pfam and Interpro with the structure-oriented databases of SCOP and CATH. This information has been made available to the eFamily group (http://www.efamily.org.uk/) and now forms the basis of the regular interchange of information between the member databases (MSD, UniProt, Pfam, Interpro, SCOP and CATH). This exchange of annotation information has enriched the structural information in the MSD database with annotation from wider sequence-oriented resources. This work was carried out under the ‘Structure Integration with Function, Taxonomy and Sequences (SIFTS)’ initiative (http://www.ebi.ac.uk/msd-srv/docs/sifts) in the MSD group. PMID:15608192
Application of China's National Forest Continuous Inventory database.
Xie, Xiaokui; Wang, Qingli; Dai, Limin; Su, Dongkai; Wang, Xinchuang; Qi, Guang; Ye, Yujing
2011-12-01
The maintenance of a timely, reliable and accurate spatial database on current forest ecosystem conditions and changes is essential to characterize and assess forest resources and support sustainable forest management. Information for such a database can be obtained only through a continuous forest inventory. The National Forest Continuous Inventory (NFCI) is the first level of China's three-tiered inventory system. The NFCI is administered by the State Forestry Administration; data are acquired by five inventory institutions around the country. Several important components of the database include land type, forest classification and age-class/age-group. The NFCI database in China is constructed based on 5-year inventory periods, resulting in some of the data not being timely when reports are issued. To address this problem, a forest growth simulation model has been developed to update the database for years between the periodic inventories. In order to aid in forest plan design and management, a three-dimensional virtual reality system of forest landscapes for selected units in the database (compartment or sub-compartment) has also been developed based on Virtual Reality Modeling Language. In addition, a transparent internet publishing system for a spatial database based on open source WebGIS (UMN Map Server) has been designed and utilized to enhance public understanding and encourage free participation of interested parties in the development, implementation, and planning of sustainable forest management.
Rahi, Praveen; Prakash, Om; Shouche, Yogesh S.
2016-01-01
Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) based biotyping is an emerging technique for high-throughput and rapid microbial identification. Due to its relatively higher accuracy, comprehensive database of clinically important microorganisms and lower cost compared to other microbial identification methods, MALDI-TOF MS has started replacing existing practices prevalent in clinical diagnosis. However, the applicability of MALDI-TOF MS in microbial ecology research is still limited, mainly due to the lack of data on non-clinical microorganisms. Intense research activity on the cultivation of microbial diversity by conventional as well as innovative and high-throughput methods has substantially increased the number of microbial species known today. This important area of research is in urgent need of rapid and reliable methods for the characterization and de-replication of microorganisms from various ecosystems. MALDI-TOF MS based characterization, in our opinion, appears to be the most suitable technique for such studies. The reliability of MALDI-TOF MS based identification depends mainly on the accuracy and breadth of reference databases, which need continuous expansion and improvement. In this review, we propose a common strategy to generate MALDI-TOF MS spectral databases, advocate their sharing, and discuss the role of MALDI-TOF MS based high-throughput microbial identification in microbial ecology studies. PMID:27625644
Fish acute toxicity syndromes and their use in the QSAR approach to hazard assessment.
McKim, J M; Bradbury, S P; Niemi, G J
1987-01-01
Implementation of the Toxic Substances Control Act of 1977 creates the need to reliably establish testing priorities because laboratory resources are limited and the number of industrial chemicals requiring evaluation is overwhelming. The use of quantitative structure activity relationship (QSAR) models as rapid and predictive screening tools to select more potentially hazardous chemicals for in-depth laboratory evaluation has been proposed. Further implementation and refinement of quantitative structure-toxicity relationships in aquatic toxicology and hazard assessment requires the development of a "mode-of-action" database. With such a database, a qualitative structure-activity relationship can be formulated to assign the proper mode of action, and respective QSAR, to a given chemical structure. In this review, the development of fish acute toxicity syndromes (FATS), which are toxic-response sets based on various behavioral and physiological-biochemical measurements, and their projected use in the mode-of-action database are outlined. Using behavioral parameters monitored in the fathead minnow during acute toxicity testing, FATS associated with acetylcholinesterase (AChE) inhibitors and narcotics could be reliably predicted. However, compounds classified as oxidative phosphorylation uncouplers or stimulants could not be resolved. Refinement of this approach by using respiratory-cardiovascular responses in the rainbow trout, enabled FATS associated with AChE inhibitors, convulsants, narcotics, respiratory blockers, respiratory membrane irritants, and uncouplers to be correctly predicted. PMID:3297660
Adsorption structures and energetics of molecules on metal surfaces: Bridging experiment and theory
NASA Astrophysics Data System (ADS)
Maurer, Reinhard J.; Ruiz, Victor G.; Camarillo-Cisneros, Javier; Liu, Wei; Ferri, Nicola; Reuter, Karsten; Tkatchenko, Alexandre
2016-05-01
Adsorption geometry and stability of organic molecules on surfaces are key parameters that determine the observable properties and functions of hybrid inorganic/organic systems (HIOSs). Despite many recent advances in precise experimental characterization and improvements in first-principles electronic structure methods, reliable databases of structures and energetics for large adsorbed molecules are largely missing. In this review, we present such a database for a range of molecules adsorbed on metal single-crystal surfaces. The systems we analyze include noble-gas atoms, conjugated aromatic molecules, carbon nanostructures, and heteroaromatic compounds adsorbed on five different metal surfaces. The overall objective is to establish a diverse benchmark dataset that enables an assessment of current and future electronic structure methods, and motivates further experimental studies that provide ever more reliable data. Specifically, the benchmark structures and energetics from experiment are here compared with the recently developed van der Waals (vdW) inclusive density-functional theory (DFT) method, DFT + vdWsurf. In comparison to 23 adsorption heights and 17 adsorption energies from experiment we find a mean average deviation of 0.06 Å and 0.16 eV, respectively. This confirms the DFT + vdWsurf method as an accurate and efficient approach to treat HIOSs. A detailed discussion identifies remaining challenges to be addressed in future development of electronic structure methods, for which the benchmark database presented here may serve as an important reference.
Warnell, F; George, B; McConachie, H; Johnson, M; Hardy, R; Parr, J R
2015-01-01
Objectives: (1) Describe how the Autism Spectrum Database-UK (ASD-UK) was established; (2) investigate the representativeness of the first 1000 children and families who participated, compared to those who chose not to; (3) investigate the reliability of the parent-reported Autism Spectrum Disorder (ASD) diagnoses, and present evidence about the validity of diagnoses, that is, whether children recruited actually have an ASD; (4) present evidence about the representativeness of the ASD-UK children and families, by comparing their characteristics with the first 1000 children and families from the regional Database of children with ASD living in the North East (Daslne), and children and families identified from epidemiological studies. Setting: Recruitment through a network of 50 UK child health teams and self-referral. Patients: Parents/carers with a child with ASD, aged 2–16 years, completed questionnaires about ASD and some gave professionals’ reports about their children. Results: 1000 families registered with ASD-UK in 30 months. Children of families who participated, and of the 208 who chose not to, were found to be very similar on: gender ratio, year of birth, ASD diagnosis and social deprivation score. The reliability of parent-reported ASD diagnoses of children was very high when compared with clinical reports (over 96%); no database child without ASD was identified. A comparison of gender, ASD diagnosis, age at diagnosis, school placement, learning disability, and deprivation score of children and families from ASD-UK with 1084 children and families from Daslne, and families from population studies, showed that ASD-UK families are representative of families of children with ASD overall. Conclusions: ASD-UK includes families providing parent-reported data about their child and family, who appear to be broadly representative of UK children with ASD.
Families continue to join the databases and more than 3000 families can now be contacted by researchers about UK autism research. PMID:26341584
Hoijemberg, Pablo A; Pelczer, István
2018-01-05
Researchers spend a great deal of time identifying metabolites in NMR-based metabolomic studies. Metabolite identification usually starts by querying public or commercial databases to match chemical shifts thought to belong to a given compound. Statistical total correlation spectroscopy (STOCSY), in use for more than a decade, speeds up the process by finding statistical correlations among peaks, yielding a better peak list as input for the database query. However, the (normally not automated) analysis becomes challenging due to the intrinsic issue of peak overlap, where correlations of more than one compound appear in the STOCSY trace. Here we present a fully automated methodology that analyzes all STOCSY traces at once (every peak is chosen as driver peak) and overcomes the peak overlap obstacle. Peak overlap detection by clustering analysis and sorting of traces (POD-CAST) first creates an overlap matrix from the STOCSY traces, then clusters the overlap traces based on their similarity and finally calculates a cumulative overlap index (COI) to account for both strong and intermediate correlations. This information is gathered in one plot to help the user identify the groups of peaks that would belong to a single molecule and perform a more reliable database query. The simultaneous examination of all traces reduces the time of analysis, compared to viewing STOCSY traces in pairs or small groups, and condenses the redundant information in the 2D STOCSY matrix into bands containing similar traces. The COI helps in the detection of overlapping peaks, which can be added to the peak list from another cross-correlated band.
POD-CAST thus tackles the generally overlooked and underestimated problem of overlapping peaks, detecting them and including them in the search for all compounds contributing to the overlap, enabling the user to accelerate metabolite identification through more successful database queries that cover all tentative compounds in the sample set.
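The core mechanics described above (correlating every peak against every other across samples, then grouping similar traces) can be sketched in a few lines. This is an illustrative Python sketch, not the authors' implementation; the function names, the correlation threshold, and the use of a greedy Jaccard grouping in place of the paper's clustering step are all assumptions.

```python
import numpy as np

def stocsy_traces(X, threshold=0.8):
    """Binary STOCSY traces: X is an (n_samples, n_peaks) intensity
    matrix; trace i marks the peaks whose intensities correlate with
    driver peak i above the threshold (in absolute value)."""
    R = np.corrcoef(X, rowvar=False)            # peak-by-peak correlations
    return (np.abs(R) >= threshold).astype(int)

def group_similar_traces(T, min_jaccard=0.5):
    """Greedy grouping of traces by Jaccard similarity, a simple
    stand-in for the clustering/sorting step of POD-CAST."""
    groups, assigned = [], set()
    for i in range(T.shape[0]):
        if i in assigned:
            continue
        group = [i]
        for j in range(i + 1, T.shape[0]):
            if j in assigned:
                continue
            inter = np.sum(T[i] & T[j])
            union = np.sum(T[i] | T[j])
            if union and inter / union >= min_jaccard:
                group.append(j)
                assigned.add(j)
        assigned.update(group)
        groups.append(group)
    return groups
```

Peaks whose traces fall into one group are candidates for belonging to a single molecule and can be submitted together as one database query.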
How effective are DNA barcodes in the identification of African rainforest trees?
Parmentier, Ingrid; Duminil, Jérôme; Kuzmina, Maria; Philippe, Morgane; Thomas, Duncan W; Kenfack, David; Chuyong, George B; Cruaud, Corinne; Hardy, Olivier J
2013-01-01
DNA barcoding of rain forest trees could potentially help biologists identify species and discover new ones. However, DNA barcodes cannot always distinguish between closely related species, and the size and completeness of barcode databases are key parameters for their successful application. We test the ability of rbcL, matK and trnH-psbA plastid DNA markers to identify rain forest trees at two sites in Atlantic central Africa under the assumption that a database is exhaustive in terms of species content, but not necessarily in terms of haplotype diversity within species. We assess the accuracy of identification to species or genus using a genetic distance matrix between samples either based on a global multiple sequence alignment (GD) or on a basic local alignment search tool (BLAST). Where a local database is available (within a 50 ha plot), barcoding was generally reliable for genus identification (95-100% success), but less for species identification (71-88%). Using a single marker, best results for species identification were obtained with trnH-psbA. There was a significant decrease of barcoding success in species-rich clades. When the local database was used to identify the genus of trees from another region and did include all genera from the query individuals but not all species, genus identification success decreased to 84-90%. The GD method performed best but a global multiple sequence alignment is not applicable on trnH-psbA. Barcoding is a useful tool to assign unidentified African rain forest trees to a genus, but identification to a species is less reliable, especially in species-rich clades, even using an exhaustive local database. Combining two markers improves the accuracy of species identification but it would only marginally improve genus identification. Finally, we highlight some limitations of the BLAST algorithm as currently implemented and suggest possible improvements for barcoding applications.
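The distance-based (GD) identification described above amounts to assigning each query to the taxon of its nearest reference in a genetic distance matrix. The following Python sketch illustrates the idea using uncorrected p-distance on aligned sequences; the function names and the toy genus labels are assumptions, not the authors' code.

```python
def p_distance(a, b):
    """Uncorrected p-distance between two aligned sequences:
    the proportion of mismatched sites, ignoring gap positions."""
    sites = [(x, y) for x, y in zip(a, b) if x != '-' and y != '-']
    if not sites:
        return 1.0
    return sum(x != y for x, y in sites) / len(sites)

def assign_genus(query, references):
    """Assign the query to the genus of its nearest reference.
    references: list of (genus, aligned_sequence) tuples."""
    return min(references, key=lambda r: p_distance(query, r[1]))[0]
```

With an exhaustive local reference set this nearest-neighbour assignment is what makes genus identification reliable; species identification fails more often because congeneric species may share identical or near-identical barcode haplotypes.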
Irinyi, Laszlo; Serena, Carolina; Garcia-Hermoso, Dea; Arabatzis, Michael; Desnos-Ollivier, Marie; Vu, Duong; Cardinali, Gianluigi; Arthur, Ian; Normand, Anne-Cécile; Giraldo, Alejandra; da Cunha, Keith Cassia; Sandoval-Denis, Marcelo; Hendrickx, Marijke; Nishikaku, Angela Satie; de Azevedo Melo, Analy Salles; Merseguel, Karina Bellinghausen; Khan, Aziza; Parente Rocha, Juliana Alves; Sampaio, Paula; da Silva Briones, Marcelo Ribeiro; e Ferreira, Renata Carmona; de Medeiros Muniz, Mauro; Castañón-Olivares, Laura Rosio; Estrada-Barcenas, Daniel; Cassagne, Carole; Mary, Charles; Duan, Shu Yao; Kong, Fanrong; Sun, Annie Ying; Zeng, Xianyu; Zhao, Zuotao; Gantois, Nausicaa; Botterel, Françoise; Robbertse, Barbara; Schoch, Conrad; Gams, Walter; Ellis, David; Halliday, Catriona; Chen, Sharon; Sorrell, Tania C; Piarroux, Renaud; Colombo, Arnaldo L; Pais, Célia; de Hoog, Sybren; Zancopé-Oliveira, Rosely Maria; Taylor, Maria Lucia; Toriello, Conchita; de Almeida Soares, Célia Maria; Delhaes, Laurence; Stubbe, Dirk; Dromer, Françoise; Ranque, Stéphane; Guarro, Josep; Cano-Lira, Jose F; Robert, Vincent; Velegraki, Aristea; Meyer, Wieland
2015-05-01
Human and animal fungal pathogens are a growing threat worldwide, leading to emerging infections and creating new risks for established ones. There is a growing need for rapid and accurate identification of pathogens to enable early diagnosis and targeted antifungal therapy. Morphological and biochemical identification methods are time-consuming and require trained experts. Alternatively, molecular methods such as DNA barcoding, a powerful and easy tool for rapid monophasic identification, offer a practical approach for species identification that is less demanding in terms of taxonomic expertise. However, its widespread use is still limited by a lack of quality-controlled reference databases and the evolving recognition and definition of new fungal species/complexes. An international consortium of medical mycology laboratories was formed with the aim of establishing a quality-controlled ITS database under the umbrella of the ISHAM working group on "DNA barcoding of human and animal pathogenic fungi." A new database containing 2800 ITS sequences representing 421 fungal species was established, providing the medical community with a freely accessible tool at http://www.isham.org/ and http://its.mycologylab.org/ to rapidly and reliably identify most agents of mycoses. The sequences included in the new database were used to evaluate the variation and overall utility of the ITS region for the identification of pathogenic fungi at the intra- and interspecies level. The average intraspecies variation ranged from 0 to 2.25%. This highlighted selected pathogenic fungal species, such as the dermatophytes and emerging yeasts, for which additional molecular methods/genetic markers are required for reliable identification from clinical and veterinary specimens.
Species longevity in North American fossil mammals.
Prothero, Donald R
2014-08-01
Species longevity in the fossil record is related to many paleoecological variables and is important to macroevolutionary studies, yet there are very few reliable data on average species durations in Cenozoic fossil mammals. Many of the online databases (such as the Paleobiology Database) cover only genera of North American Cenozoic mammals, and there are severe problems because key groups (e.g. camels, oreodonts, pronghorns and proboscideans) have no reliable updated taxonomy, with many invalid genera and species and/or many undescribed genera and species. Most of the published datasets yield species duration estimates of approximately 2.3-4.3 Myr for larger mammals, with small mammals tending to have shorter species durations. My own compilation of all the valid species durations in families with updated taxonomy (39 families, containing 431 genera and 998 species, averaging 2.3 species per genus) yields a mean duration of 3.21 Myr for larger mammals. This breaks down to 4.10-4.39 Myr for artiodactyls, 3.14-3.31 Myr for perissodactyls and 2.63-2.95 Myr for carnivorous mammals (carnivorans plus creodonts). These averages are based on a much larger, more robust dataset than most previous estimates, so they should be more reliable for any studies that need species longevity to be accurately estimated.
PASS2: an automated database of protein alignments organised as structural superfamilies.
Bhaduri, Anirban; Pugalenthi, Ganesan; Sowdhamini, Ramanathan
2004-04-02
The functional selection and three-dimensional structural constraints of proteins in nature often relate to the retention of significant sequence similarity between proteins of similar fold and function despite poor sequence identity. Organization of structure-based sequence alignments for distantly related proteins provides a map of the conserved and critical regions of the protein universe that is useful for the analysis of folding principles, for the evolutionary unification of protein families and for maximizing the information return from experimental structure determination. The Protein Alignment organised as Structural Superfamily (PASS2) database represents continuously updated structural alignments for evolutionarily related, sequentially distant proteins. The automated, updated version of PASS2 is in direct correspondence with SCOP 1.63 and consists of sequences having identity below 40% among themselves. Protein domains have been grouped into 628 multi-member superfamilies and 566 single-member superfamilies. Structure-based sequence alignments for the superfamilies have been obtained using COMPARER, while initial equivalencies have been derived from a preliminary superposition using LSQMAN or STAMP 4.0. The final sequence alignments have been annotated for structural features using JOY4.0. The database is supplemented with sequence relatives belonging to different genomes, conserved spatially interacting and structural motifs, probabilistic hidden Markov models of superfamilies based on the alignments, and useful links to other databases. Probabilistic models and sensitive position-specific profiles obtained from reliable superfamily alignments aid annotation of remote homologues and are useful tools in structural and functional genomics. PASS2 presents the phylogeny of its members based on both sequence and structural dissimilarities. Clustering of members allows us to understand the diversification of the family members.
The search engine has been improved for simpler browsing of the database. The database resolves alignments among structural domains consisting of evolutionarily diverged sets of sequences. The availability of reliable sequence alignments of distantly related proteins, despite poor sequence identity, together with single-member superfamilies, permits better sampling of structures in libraries for fold recognition of new sequences and for the understanding of protein structure-function relationships of individual superfamilies. PASS2 is accessible at http://www.ncbs.res.in/~faculty/mini/campass/pass2.html
A comprehensive review of the psychometric properties of the Drug Abuse Screening Test.
Yudko, Errol; Lozhkina, Olga; Fouts, Adriana
2007-03-01
This article reviews the reliability and the validity of the (10-, 20-, and 28-item) Drug Abuse Screening Test (DAST). The reliability and the validity of the adolescent version of the DAST are also reviewed. An extensive literature review was conducted using the Medline and PsycINFO databases from the years 1982 to 2005. All articles that addressed the reliability and the validity of the DAST were examined. Publications in which the DAST was used as a screening tool but that had no data on its psychometric properties were not included. Descriptive information about each version of the test, as well as discussion of the empirical literature that has explored measures of the reliability and the validity of the DAST, has been included. The DAST tended to have moderate to high levels of test-retest, interitem, and item-total reliabilities. The DAST also tended to have moderate to high levels of validity, sensitivity, and specificity. In general, all versions of the DAST yield satisfactory measures of reliability and validity for use as clinical or research tools. Furthermore, these tests are easy to administer and have been used in a variety of populations.
Difficulties in diagnosing Marfan syndrome using current FBN1 databases.
Groth, Kristian A; Gaustadnes, Mette; Thorsen, Kasper; Østergaard, John R; Jensen, Uffe Birk; Gravholt, Claus H; Andersen, Niels H
2016-01-01
The diagnostic criteria of Marfan syndrome (MFS) highlight the importance of a FBN1 mutation test in diagnosing MFS. As genetic sequencing becomes better, cheaper, and more accessible, the number of genetic tests will increase, yielding numerous genetic variants that need to be evaluated for disease-causing effects on the basis of database information. The aim of this study was to evaluate genetic variants in four databases and review the relevant literature. We assessed background data on 23 common variants registered in ESP6500 and classified as causing MFS in the Human Gene Mutation Database (HGMD). We evaluated data in four variant databases (HGMD, UMD-FBN1, ClinVar, and UniProt) according to the diagnostic criteria for MFS and compared the results with the classification of each variant in the four databases. None of the 23 variants was clearly associated with MFS, even though all classifications in the databases stated otherwise. A genetic diagnosis of MFS cannot reliably be based on current variant databases because they contain incorrectly interpreted conclusions on variants. Variants must be evaluated by time-consuming review of the background material in the databases and by combining these data with expert knowledge on MFS. This is a major problem because we expect even more genetic test results in the near future as a result of the reduced cost and process time of next-generation sequencing. Genet Med 18(1), 98-102.
A dedicated database system for handling multi-level data in systems biology.
Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens
2014-01-01
Advances in high-throughput technologies have enabled extensive generation of multi-level omics data. These data are crucial for systems biology research, though they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, integration and management of systems biological data remain very challenging. To overcome this, we designed and developed a dedicated database system that can serve and solve the vital issues in data management and thereby facilitate data integration, modeling and analysis in systems biology within a single database. In addition, a yeast data repository was implemented as an integrated database environment which is operated by the database system. Two applications were implemented to demonstrate the extensibility and utilization of the system. Both illustrate how the user can access the database via the web query function and implemented scripts. These scripts are specific for two sample cases: 1) detecting the pheromone pathway in protein interaction networks; and 2) finding metabolic reactions regulated by Snf1 kinase. In this study we present the design of a database system which offers an extensible environment to efficiently capture the majority of biological entities and relations encountered in systems biology. Critical functions and control processes were designed and implemented to ensure consistent, efficient, secure and reliable transactions. The two sample cases on the yeast integrated data clearly demonstrate the value of a single database environment for systems biology research.
Zaki, Rafdzah; Bulgiba, Awang; Nordin, Noorhaire; Azina Ismail, Noor
2013-06-01
Reliability measures precision, or the extent to which test results can be replicated. This is the first systematic review to identify statistical methods used to measure the reliability of equipment measuring continuous variables. This study also aims to highlight inappropriate statistical methods used in reliability analysis and their implications for medical practice. In 2010, five electronic databases were searched for reliability studies published between 2007 and 2009. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and 42 ultimately fitted the inclusion criteria. The Intra-class Correlation Coefficient (ICC) was the most popular method, used in 25 (60%) studies, followed by comparison of means (8 studies, 19%). Of the 25 studies using the ICC, only 7 (28%) reported confidence intervals and the type of ICC used. Most studies (71%) also tested the agreement of instruments. This study finds that the Intra-class Correlation Coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue and to be able to correctly perform analysis in reliability studies.
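The ICC the review discusses comes in several forms, which is why reporting the type matters. As a hedged illustration (not taken from the reviewed studies), the two-way random-effects, absolute-agreement, single-rater form, commonly written ICC(2,1), can be computed from the ANOVA mean squares:

```python
import numpy as np

def icc_2_1(X):
    """Two-way random-effects, absolute-agreement, single-rater ICC,
    often written ICC(2,1). X: (n_subjects, k_raters) matrix of scores."""
    n, k = X.shape
    grand = X.mean()
    row_means = X.mean(axis=1)                       # per-subject means
    col_means = X.mean(axis=0)                       # per-rater means
    ssr = k * np.sum((row_means - grand) ** 2)       # between-subjects SS
    ssc = n * np.sum((col_means - grand) ** 2)       # between-raters SS
    sse = np.sum((X - grand) ** 2) - ssr - ssc       # residual SS
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect agreement yields an ICC of 1, and systematic disagreement can drive the coefficient to zero or below, which is one reason interpreting an ICC without its type and confidence interval, as many of the surveyed studies did, is problematic.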
Reliability and validity of current physical examination techniques of the foot and ankle.
Wrobel, James S; Armstrong, David G
2008-01-01
This literature review was undertaken to evaluate the reliability and validity of the orthopedic, neurologic, and vascular examination of the foot and ankle. We searched PubMed, the US National Library of Medicine's database of biomedical citations and abstracts, for relevant publications from 1966 to 2006. We also searched the bibliographies of the retrieved articles. We identified 35 articles to review. For discussion purposes, we used reliability interpretation guidelines proposed by others. For the kappa statistic, which calculates reliability for dichotomous (eg, yes or no) measures, reliability was defined as moderate (0.4-0.6), substantial (0.6-0.8), and outstanding (> 0.8). For the intraclass correlation coefficient, which calculates reliability for continuous (eg, degrees of motion) measures, reliability was defined as good (> 0.75), moderate (0.5-0.75), and poor (< 0.5). Intraclass correlations, based on the various examinations performed, varied widely, ranging from 0.08 to 0.98. Concurrent and predictive validity ranged from poor to good. Although hundreds of articles exist describing various methods of lower-extremity assessment, few rigorously assess the measurement properties. This information can be used both by the discerning clinician in the art of clinical examination and by the scientist in assessing the measurement properties of reproducibility and validity.
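The kappa statistic and the interpretation bands cited above are straightforward to compute. The following Python sketch is illustrative only (the band labels follow the guidelines quoted in the abstract; the function names are assumptions):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical (e.g. yes/no) ratings:
    observed agreement corrected for chance agreement."""
    assert len(r1) == len(r2)
    n = len(r1)
    labels = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n            # observed
    pe = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)  # chance
    if pe == 1.0:
        return 1.0
    return (po - pe) / (1 - pe)

def interpret_kappa(k):
    """Bands used in the review: moderate, substantial, outstanding."""
    if k > 0.8:
        return "outstanding"
    if k >= 0.6:
        return "substantial"
    if k >= 0.4:
        return "moderate"
    return "below moderate"
```

For example, two examiners agreeing on 3 of 4 dichotomous findings can still land only in the "moderate" band once chance agreement is removed, which is why raw percentage agreement overstates reliability.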
Systematic review of methods for quantifying teamwork in the operating theatre
Marshall, D.; Sykes, M.; McCulloch, P.; Shalhoub, J.; Maruthappu, M.
2018-01-01
Background Teamwork in the operating theatre is becoming increasingly recognized as a major factor in clinical outcomes. Many tools have been developed to measure teamwork. Most fall into two categories: self-assessment by theatre staff and assessment by observers. A critical and comparative analysis of the validity and reliability of these tools is lacking. Methods MEDLINE and Embase databases were searched following PRISMA guidelines. Content validity was assessed using measurements of inter-rater agreement, predictive validity and multisite reliability, and interobserver reliability using statistical measures of inter-rater agreement and reliability. Quantitative meta-analysis was deemed unsuitable. Results Forty-eight articles were selected for final inclusion; self-assessment tools were used in 18 and observational tools in 28, and there were two qualitative studies. Self-assessment of teamwork varied with the profession of the assessor. The most robust self-assessment tool was the Safety Attitudes Questionnaire (SAQ), although this failed to demonstrate multisite reliability. The most robust observational tool was the Non-Technical Skills (NOTECHS) system, which demonstrated both test-retest reliability (P > 0.09) and interobserver reliability (Rwg = 0.96). Conclusion Self-assessment of teamwork by the theatre team was influenced by professional differences. Observational tools, when used by trained observers, circumvented this.
Prototype of web-based database of surface wave investigation results for site classification
NASA Astrophysics Data System (ADS)
Hayashi, K.; Cakir, R.; Martin, A. J.; Craig, M. S.; Lorenzo, J. M.
2016-12-01
As active and passive surface wave methods become popular for evaluating the site response of earthquake ground motion, demand for a database of investigation results is also increasing. Seismic ground motion depends not only on the 1D velocity structure but also on 2D and 3D structures, so spatial information on S-wave velocity must be considered in ground motion prediction. A database can support the construction of 2D and 3D underground models. Inversion in surface wave processing is essentially non-unique, so other information must be combined into the processing. A database of existing geophysical, geological and geotechnical investigation results can provide indispensable information to improve the accuracy and reliability of investigations. Most investigations, however, are carried out by individual organizations, and investigation results are rarely stored in a unified and organized database. To study and discuss an appropriate database and a digital standard format for surface wave investigations, we developed a prototype of a web-based database to store observed data and processing results of surface wave investigations that we have performed at more than 400 sites in the U.S. and Japan. The database was constructed on a web server using MySQL and PHP, so that users can access it through the internet from anywhere with any device. All data are registered in the database with location, and users can search geophysical data through Google Maps. The database stores dispersion curves, horizontal-to-vertical spectral ratios and S-wave velocity profiles for each site, saved as digital data in XML files so that users can review and reuse them. The database also stores a published 3D deep basin and crustal structure, and users can refer to it during the processing of surface wave data.
Crowdsourcing-Assisted Radio Environment Database for V2V Communication.
Katagiri, Keita; Sato, Koya; Fujii, Takeo
2018-04-12
In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, accurate recognition of radio propagation becomes an important technology. However, in current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of radio environment estimation in V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI), linked with the transmission/reception locations, from V2V systems. From the datasets, average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in a real environment to observe RSSI for the database construction. Our results show that the proposed method estimates the radio propagation more accurately than the conventional path-loss-model-based estimation.
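Building an average received power map keyed on transmitter/receiver locations, as described above, essentially means binning RSSI measurements into location-pair cells and averaging. The sketch below is an illustrative Python rendering under assumed details: the mesh size, the tuple layout, and averaging in milliwatts rather than directly in dB (dB values do not average linearly) are design choices of this sketch, not necessarily the paper's.

```python
import math
from collections import defaultdict

def build_power_map(measurements, mesh=10.0):
    """Average received power per (tx_cell, rx_cell) pair.

    measurements: iterable of (tx_x, tx_y, rx_x, rx_y, rssi_dbm),
    with coordinates in metres; mesh is the square cell size in metres.
    RSSI values are converted dBm -> mW, averaged, then converted back.
    """
    cells = defaultdict(list)
    for tx_x, tx_y, rx_x, rx_y, rssi in measurements:
        key = (int(tx_x // mesh), int(tx_y // mesh),
               int(rx_x // mesh), int(rx_y // mesh))
        cells[key].append(10 ** (rssi / 10.0))       # dBm -> mW
    return {k: 10.0 * math.log10(sum(v) / len(v))    # mean mW -> dBm
            for k, v in cells.items()}
```

A V2V receiver can then look up the cell pair for its current position and an intended transmitter to predict link quality, falling back to a path loss model where the map has no measurements yet.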
Shoberg, Thomas G.; Stoddard, Paul R.
2013-01-01
The ability to augment local gravity surveys with additional gravity stations from easily accessible national databases can greatly increase the areal coverage and spatial resolution of a survey. It is, however, necessary to integrate such data seamlessly with the local survey. One challenge to overcome in integrating data from national databases is that these data are typically of unknown quality. This study presents a procedure for the evaluation and seamless integration of gravity data of unknown quality from a national database with data from a local Global Positioning System (GPS)-based survey. The starting components include the latitude, longitude, elevation and observed gravity at each station location. Interpolated surfaces of the complete Bouguer anomaly are used as a means of quality control and comparison. The result is an integrated dataset of varying quality with many stations having GPS accuracy and other reliable stations of unknown origin, yielding a wider coverage and greater spatial resolution than either survey alone.
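The Bouguer anomaly surfaces used above for quality control are derived from exactly the per-station quantities listed (latitude, longitude, elevation, observed gravity). As a hedged illustration of the arithmetic involved, the following Python sketch computes the simple Bouguer anomaly (the complete Bouguer anomaly of the abstract additionally includes a terrain correction); the function name and the conventional constants are this sketch's assumptions:

```python
def simple_bouguer_anomaly(g_obs, g_theoretical, h, density=2.67):
    """Simple Bouguer anomaly in mGal.

    g_obs, g_theoretical : observed and normal (latitude-dependent)
                           gravity at the station, in mGal
    h                    : station elevation above the datum, in metres
    density              : slab density in g/cm^3 (2.67 is conventional)

    Free-air correction: +0.3086 mGal/m.
    Bouguer slab correction: 2*pi*G*rho*h ~= 0.04193 * density mGal/m.
    """
    free_air = 0.3086 * h
    slab = 0.04193 * density * h
    return g_obs - g_theoretical + free_air - slab
```

Because the anomaly folds in elevation, stations with poor elevation control (common in legacy national-database data) produce outliers on the interpolated anomaly surface, which is what makes the surface useful as a screening tool before integration with GPS-accurate local stations.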
BioQ: tracing experimental origins in public genomic databases using a novel data provenance model
Saccone, Scott F.; Quan, Jiaxi; Jones, Peter L.
2012-01-01
Motivation: Public genomic databases, which are often used to guide genetic studies of human disease, are now being applied to genomic medicine through in silico integrative genomics. These databases, however, often lack tools for systematically determining the experimental origins of the data. Results: We introduce a new data provenance model that we have implemented in a public web application, BioQ, for assessing the reliability of the data by systematically tracing its experimental origins to the original subjects and biologics. BioQ allows investigators to both visualize data provenance as well as explore individual elements of experimental process flow using precise tools for detailed data exploration and documentation. It includes a number of human genetic variation databases such as the HapMap and 1000 Genomes projects. Availability and implementation: BioQ is freely available to the public at http://bioq.saclab.net Contact: ssaccone@wustl.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22426342
An Algorithm of Association Rule Mining for Microbial Energy Prospection
Shaheen, Muhammad; Shahbaz, Muhammad
2017-01-01
The presence of hydrocarbons beneath the earth's surface produces microbiological anomalies in soils and sediments. The detection of such microbial populations involves purely biochemical processes which are specialized, expensive and time consuming. This paper proposes a new algorithm for context-based association rule mining on non-spatial data. The algorithm is a modified form of an already developed algorithm that was designed for spatial databases only. The algorithm is applied to mine context-based association rules on a microbial database to extract interesting and useful associations of microbial attributes with the existence of hydrocarbon reserves. The surface and soil manifestations caused by the presence of hydrocarbon-oxidizing microbes are selected from the existing literature and stored in a shared database. The algorithm is applied to this database to generate direct and indirect associations among the stored microbial indicators. These associations are then correlated with the probability of hydrocarbon existence. The numerical evaluation shows better accuracy on non-spatial data, as compared to conventional algorithms, at generating reliable and robust rules. PMID:28393846
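Association rules of the kind mined above are defined by support (how often an indicator pair co-occurs) and confidence (how often the consequent follows the antecedent). The toy Python miner below illustrates only these two standard measures, not the paper's context-based algorithm; the indicator names and thresholds are invented for the example.

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.5, min_confidence=0.7):
    """Minimal support/confidence rule miner over sets of indicators.

    transactions: list of sets, one per sample site.
    Returns (antecedent, consequent, support, confidence) tuples for
    all pairwise rules passing both thresholds.
    """
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    rules = []
    for a, b in combinations(items, 2):
        for lhs, rhs in ((a, b), (b, a)):
            s = support({lhs, rhs})
            s_lhs = support({lhs})
            if s >= min_support and s_lhs and s / s_lhs >= min_confidence:
                rules.append((lhs, rhs, s, s / s_lhs))
    return rules
```

In the hydrocarbon-prospection setting, a rule whose consequent is a hydrocarbon indicator and whose confidence is high flags the antecedent microbial anomaly as a useful surface manifestation of a possible reserve.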
PseudoBase: a database with RNA pseudoknots.
van Batenburg, F H; Gultyaev, A P; Pleij, C W; Ng, J; Oliehoek, J
2000-01-01
PseudoBase is a database containing structural, functional and sequence data related to RNA pseudoknots. It can be reached at http://wwwbio.LeidenUniv.nl/~Batenburg/PKB.html. This page will direct the user to a retrieval page from where a particular pseudoknot can be chosen, to a submission page which enables the user to add pseudoknot information to the database, or to an informative page that elaborates on the various aspects of the database. For each pseudoknot, 12 items are stored, e.g. the nucleotides of the region that contains the pseudoknot, the stem positions of the pseudoknot, the EMBL accession number of the sequence that contains the pseudoknot and the support that can be given regarding the reliability of the pseudoknot. Access is via a small number of steps, using 16 different categories. Development followed the evolutionary methodology for software development rather than the classical waterfall model or the more modern spiral model.
An X-Ray Analysis Database of Photoionization Cross Sections Including Variable Ionization
NASA Technical Reports Server (NTRS)
Wang, Ping; Cohen, David H.; MacFarlane, Joseph J.; Cassinelli, Joseph P.
1997-01-01
Results of research efforts in the following areas are discussed: a review of the major theoretical and experimental data on subshell photoionization cross sections and ionization edges of atomic ions, undertaken to assess the accuracy of the data and to compile the most reliable of these data in our own database; detailed atomic physics calculations to complement the database for all ions of 17 cosmically abundant elements; reconciliation of the data from various sources with our own calculations; and fitting of the cross sections with functional approximations that were incorporated into a compact computer code. Efforts also included adapting an ionization equilibrium code, tabulating results, incorporating them into the overall program, and testing the code (both the ionization equilibrium and opacity codes) against existing observational data. The background and scientific applications of this work are discussed. Atomic physics cross section models and calculations are described. Calculation results are compared with available experimental data and other theoretical data. The functional approximations used for fitting cross sections are outlined and applications of the database are discussed.
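As an illustration of the kind of functional approximation mentioned, a cross section can be fitted with a single power law via least squares in log-log space; the paper's actual fitting functions are more elaborate, so this is only a hedged sketch:

```python
import math

def fit_power_law(energies, sigmas):
    """Fit sigma(E) ~ sigma0 * E**(-s) by least squares in log-log space.

    A common simple approximation for a photoionization cross section
    above an ionization edge (illustrative only).
    """
    xs = [math.log(e) for e in energies]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    sigma0 = math.exp(my - slope * mx)
    return sigma0, -slope   # exponent s, sign convention sigma ~ E^-s

# Synthetic data following sigma = 5.0 * E^-3 is recovered exactly.
sigma0, s = fit_power_law([1.0, 2.0, 4.0], [5.0, 5.0 / 8, 5.0 / 64])
```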
Puncture-proof picture archiving and communication system.
Willis, C E; McCluggage, C W; Orand, M R; Parker, B R
2001-06-01
As we become increasingly dependent on our picture archiving and communications system (PACS) for the clinical practice of medicine, the demand for improved reliability becomes urgent. Borrowing principles from the discipline of reliability engineering, we have identified components of our system that constitute single points of failure and have endeavored to eliminate these through redundant components and manual work-around procedures. To assess the adequacy of our preparations, we have identified a set of plausible events that could interfere with the function of one or more of our PACS components. These events could be as simple as the loss of the network connection to a single component or as broad as the loss of our central data center. We have identified the need to continue to operate during adverse conditions, as well as the requirement to recover rapidly from major disruptions in service. This assessment led us to modify the physical locations of central PACS components within our physical plant. We are also taking advantage of actual disruptive events, coincident with a major expansion of our facility, to test our recovery procedures. Based on our recognition of the vital nature of our electronic images for patient care, we are now recording electronic images in two copies on disparate media. The image database is critical to both continued operations and recovery. Restoration of the database from periodic tape backups with a 24-hour cycle time may not support our clinical scenario: acquisition modalities have a limited local storage capacity, which in some cases will not hold the daily workload. Restoration of the database from the archived media is an exceedingly slow process that will likely not meet our requirement to restore clinical operations without significant delay. Our PACS vendor is working on concurrent image databases that would be capable of nearly immediate switchover and recovery.
Reliability of intracerebral hemorrhage classification systems: A systematic review.
Rannikmäe, Kristiina; Woodfield, Rebecca; Anderson, Craig S; Charidimou, Andreas; Chiewvit, Pipat; Greenberg, Steven M; Jeng, Jiann-Shing; Meretoja, Atte; Palm, Frederic; Putaala, Jukka; Rinkel, Gabriel Je; Rosand, Jonathan; Rost, Natalia S; Strbian, Daniel; Tatlisumak, Turgut; Tsai, Chung-Fen; Wermer, Marieke Jh; Werring, David; Yeh, Shin-Joe; Al-Shahi Salman, Rustam; Sudlow, Cathie Lm
2016-08-01
Accurately distinguishing non-traumatic intracerebral hemorrhage (ICH) subtypes is important since they may have different risk factors, causal pathways, management, and prognosis. We systematically assessed the inter- and intra-rater reliability of ICH classification systems. We sought all available reliability assessments of anatomical and mechanistic ICH classification systems from electronic databases and personal contacts until October 2014. We assessed included studies' characteristics, reporting quality and potential for bias; summarized reliability with kappa value forest plots; and performed meta-analyses of the proportion of cases classified into each subtype. We included 8 of 2152 studies identified. Inter- and intra-rater reliabilities were substantial to perfect for anatomical and mechanistic systems (inter-rater kappa values: anatomical 0.78-0.97 [six studies, 518 cases], mechanistic 0.89-0.93 [three studies, 510 cases]; intra-rater kappas: anatomical 0.80-1 [three studies, 137 cases], mechanistic 0.92-0.93 [two studies, 368 cases]). Reporting quality varied but no study fulfilled all criteria and none was free from potential bias. All reliability studies were performed with experienced raters in specialist centers. Proportions of ICH subtypes were largely consistent with previous reports suggesting that included studies are appropriately representative. Reliability of existing classification systems appears excellent but is unknown outside specialist centers with experienced raters. Future reliability comparisons should be facilitated by studies following recently published reporting guidelines. © 2016 World Stroke Organization.
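The inter-rater kappa values quoted above are agreement statistics of the Cohen's kappa family; a minimal two-rater version, with hypothetical ICH subtype labels, can be computed as follows:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Inter-rater agreement (Cohen's kappa) for two raters'
    categorical labels: observed agreement corrected for the
    agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical anatomical classifications by two raters.
a = ["lobar", "deep", "deep", "lobar", "deep", "lobar"]
b = ["lobar", "deep", "lobar", "lobar", "deep", "lobar"]
kappa = cohens_kappa(a, b)
```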
VizieR Online Data Catalog: Sloan magnitudes for the brightest stars, V2 (Mallama, 2018)
NASA Astrophysics Data System (ADS)
Mallama, A.
2018-05-01
A new version of the Catalog containing Sloan magnitudes for the brightest stars is presented. The accuracy of the data indicates that the Catalog is a reliable source of comparison star magnitudes for astronomical photometry. Version 2 complements the APASS database of fainter stars. (1 data file).
Development and Validation of a Behavioral Screener for Preschool-Age Children
ERIC Educational Resources Information Center
DiStefano, Christine A.; Kamphaus, Randy W.
2007-01-01
The purpose of this study was to document the development of a short behavioral scale that could be used to assess preschoolers' behavior while still retaining adequate scale coverage, reliability, and validity. Factor analysis and item analysis techniques were applied to data from a nationally representative, normative database to create a…
2016-03-01
[Fragmentary record: only residue of a report on in situ chemical oxidation survives, including a table entry ("Tinker DRA-3 Chem. Ox., potassium permanganate"), a section heading ("2.2 Advantages and Limitations: potential advantages and disadvantages of our dataset"), and a partial reference: Thomson, N.R., E.D. Hood, and G.J. Farquhar, 2007, "Permanganate Treatment of an Emplaced DNAPL Source," Ground Water Monitoring.]
The Case for Creating a Scholars Portal to the Web: A White Paper.
ERIC Educational Resources Information Center
Campbell, Jerry D.
2001-01-01
Considers the need for reliable, scholarly access to the Web and suggests that the Association for Research Libraries, in partnership with OCLC and the Library of Congress, develop a so-called scholar's portal. Topics include quality content; enhanced library services; and gateway functions, including access to commercial databases and focused…
Reliable Record Matching for a College Admissions System.
ERIC Educational Resources Information Center
Fitt, Paul D.
Prospective student data, supplied by various national college testing and student search services, can be matched with existing student records in a college admissions database. Instead of relying on one unique record identifier, such as the student's social security number, a technique has been developed that is based on a number of common data…
Helping Students Succeed. Annual Report, 2010
ERIC Educational Resources Information Center
New Mexico Higher Education Department, 2010
2010-01-01
This annual report contains postsecondary data that has been collected and analyzed using the New Mexico Higher Education Department's Data Editing and Reporting (DEAR) database, unless otherwise noted. The purpose of the DEAR system is to increase the reliability in the data and to make more efficient efforts by institutions and the New Mexico…
Space debris mitigation - engineering strategies
NASA Astrophysics Data System (ADS)
Taylor, E.; Hammond, M.
The problem of space debris pollution is acknowledged to be of growing concern by space agencies, leading to recent activities in the field of space debris mitigation. A review of the current (and near-future) mitigation guidelines, handbooks, standards and licensing procedures has identified a number of areas where further work is required. In order for space debris mitigation to be implemented in spacecraft manufacture and operation, the authors suggest that debris-related criteria need to become design parameters (following the same process as applied to reliability and radiation). To meet these parameters, spacecraft manufacturers and operators will need processes (supported by design tools and databases and implementation standards). A particular aspect of debris mitigation, as compared with conventional requirements (e.g. radiation and reliability) is the current and near-future national and international regulatory framework and associated liability aspects. A framework for these implementation standards is presented, in addition to results of in-house research and development on design tools and databases (including collision avoidance in GTO and SSTO and evaluation of failure criteria on composite and aluminium structures).
A graph-based semantic similarity measure for the gene ontology.
Alvarez, Marco A; Yan, Changhui
2011-12-01
Existing methods for calculating semantic similarities between pairs of Gene Ontology (GO) terms and gene products often rely on external databases like Gene Ontology Annotation (GOA) that annotate gene products using the GO terms. This dependency leads to some limitations in real applications. Here, we present a semantic similarity algorithm (SSA), that relies exclusively on the GO. When calculating the semantic similarity between a pair of input GO terms, SSA takes into account the shortest path between them, the depth of their nearest common ancestor, and a novel similarity score calculated between the definitions of the involved GO terms. In our work, we use SSA to calculate semantic similarities between pairs of proteins by combining pairwise semantic similarities between the GO terms that annotate the involved proteins. The reliability of SSA was evaluated by comparing the resulting semantic similarities between proteins with the functional similarities between proteins derived from expert annotations or sequence similarity. Comparisons with existing state-of-the-art methods showed that SSA is highly competitive with the other methods. SSA provides a reliable measure for semantics similarity independent of external databases of functional-annotation observations.
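The ingredients SSA combines (shortest path between terms and depth of the nearest common ancestor) can be illustrated on a toy is-a DAG; the combining formula below is a hedged stand-in, not the paper's exact score:

```python
from collections import deque

# Toy is-a hierarchy, child -> parents (a tiny stand-in for the GO DAG).
PARENTS = {"root": [], "A": ["root"], "B": ["root"],
           "A1": ["A"], "A2": ["A"], "A1x": ["A1"]}

def ancestors(term):
    """Hop distance from `term` to each of its ancestors (itself included)."""
    dist, queue = {term: 0}, deque([term])
    while queue:
        node = queue.popleft()
        for parent in PARENTS[node]:
            if parent not in dist:
                dist[parent] = dist[node] + 1
                queue.append(parent)
    return dist

def similarity(t1, t2, alpha=0.5):
    """SSA-like score: a deeper nearest common ancestor and a shorter
    connecting path give higher similarity (illustrative formula)."""
    a1, a2 = ancestors(t1), ancestors(t2)
    common = set(a1) & set(a2)
    nca = min(common, key=lambda c: a1[c] + a2[c])  # nearest common ancestor
    path = a1[nca] + a2[nca]               # shortest path through the NCA
    depth = ancestors(nca).get("root", 0)  # depth of the NCA below the root
    return depth / (depth + alpha * path) if depth + path else 1.0
```

Identical terms score 1.0, while siblings are penalized by the path between them and rewarded by how specific their shared ancestor is.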
An intelligent remote monitoring system for artificial heart.
Choi, Jaesoon; Park, Jun W; Chung, Jinhan; Min, Byoung G
2005-12-01
A web-based database system for intelligent remote monitoring of an artificial heart has been developed. It is important for patients with an artificial heart implant to be discharged from the hospital after an appropriate stabilization period for better recovery and quality of life. Reliable continuous remote monitoring systems for these patients with life support devices are gaining practical meaning. The authors have developed a remote monitoring system for this purpose that consists of a portable/desktop monitoring terminal, a database for continuous recording of patient and device status, a web-based data access system with which clinicians can access real-time patient and device status data and past history data, and an intelligent diagnosis algorithm module that noninvasively estimates blood pump output and makes automatic classification of the device status. The system has been tested with data generation emulators installed on remote sites for simulation study, and in two cases of animal experiments conducted at remote facilities. The system showed acceptable functionality and reliability. The intelligence algorithm also showed acceptable practicality in an application to animal experiment data.
A proposed group management scheme for XTP multicast
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Weaver, Alfred C.
1990-01-01
The purpose of a group management scheme is to enable its associated transfer layer protocol to be responsive to user determined reliability requirements for multicasting. Group management (GM) must assist the client process in coordinating multicast group membership, allow the user to express the subset of the multicast group that a particular multicast distribution must reach in order to be successful (reliable), and provide the transfer layer protocol with the group membership information necessary to guarantee delivery to this subset. GM provides services and mechanisms that respond to the need of the client process or process level management protocols to coordinate, modify, and determine attributes of the multicast group, especially membership. XTP GM provides a link between process groups and their multicast groups by maintaining a group membership database that identifies members in a name space understood by the underlying transfer layer protocol. Other attributes of the multicast group useful to both the client process and the data transfer protocol may be stored in the database. Examples include the relative dispersion, most recent update, and default delivery parameters of a group.
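The membership database at the heart of GM can be sketched as a small in-memory structure; the class and method names are hypothetical, and real XTP GM would store transfer-layer addresses plus further group attributes (dispersion, update times, delivery parameters):

```python
class MulticastGroupDB:
    """Toy group-management database in the spirit of the proposal: it
    maps a group to member addresses and records which subset the user
    requires a distribution to reach for it to count as reliable."""

    def __init__(self):
        self.members = {}    # group -> set of member addresses
        self.required = {}   # group -> subset that must receive data

    def join(self, group, addr):
        self.members.setdefault(group, set()).add(addr)

    def leave(self, group, addr):
        self.members.get(group, set()).discard(addr)
        self.required.get(group, set()).discard(addr)

    def require(self, group, addrs):
        """User-designated subset whose delivery defines success."""
        self.required[group] = set(addrs) & self.members.get(group, set())

    def delivery_ok(self, group, reached):
        """A multicast succeeds iff every required member was reached."""
        return self.required.get(group, set()) <= set(reached)

db = MulticastGroupDB()
for addr in ("n1", "n2", "n3"):
    db.join("sensors", addr)
db.require("sensors", ["n1", "n2"])
```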
Thielen, F W; Van Mastrigt, Gapg; Burgers, L T; Bramer, W M; Majoie, Hjm; Evers, Smaa; Kleijnen, J
2016-12-01
This article is part of the series "How to prepare a systematic review of economic evaluations (EEs) for informing evidence-based healthcare decisions", in which a five-step approach is proposed. Areas covered: This paper focuses on selecting relevant databases and developing a search strategy for detecting EEs, as well as on how to perform the search and how to extract relevant data from retrieved records. Expert commentary: Thus far, little has been published on how to conduct systematic reviews of EEs (SR-EEs). Moreover, reliable sources of information, such as the Health Economic Evaluation Database, have ceased to publish updates. Researchers are thus left without authoritative guidance on how to conduct SR-EEs. Together with van Mastrigt et al. we seek to fill this gap.
Huang, Jiahua; Zhou, Hai; Zhang, Binbin; Ding, Biao
2015-09-01
This article describes new Web-based failure database software for orthopaedic implants. The software uses the B/S (browser/server) mode, with ASP dynamic web technology as its main development language to achieve data interactivity and Microsoft Access to create the database; these mature technologies make the software easy to extend and upgrade. The article presents the design and development rationale of the software, its working process and functions, and its main technical features. With this software, many different types of failure events of orthopaedic implants can be stored and the failure data statistically analyzed; at the macroscopic level, it can be used to evaluate the reliability of orthopaedic implants and operations, and ultimately to guide doctors in improving the level of clinical treatment.
Ahmetovic, Dragan; Manduchi, Roberto; Coughlan, James M.; Mascetti, Sergio
2016-01-01
In this paper we propose a computer vision-based technique that mines existing spatial image databases for discovery of zebra crosswalks in urban settings. Knowing the location of crosswalks is critical for a blind person planning a trip that includes street crossing. By augmenting existing spatial databases (such as Google Maps or OpenStreetMap) with this information, a blind traveler may make more informed routing decisions, resulting in greater safety during independent travel. Our algorithm first searches for zebra crosswalks in satellite images; all candidates thus found are validated against spatially registered Google Street View images. This cascaded approach enables fast and reliable discovery and localization of zebra crosswalks in large image datasets. While fully automatic, our algorithm could also be complemented by a final crowdsourcing validation stage for increased accuracy. PMID:26824080
The reliability of a quality appraisal tool for studies of diagnostic reliability (QAREL).
Lucas, Nicholas; Macaskill, Petra; Irwig, Les; Moran, Robert; Rickards, Luke; Turner, Robin; Bogduk, Nikolai
2013-09-09
The aim of this project was to investigate the reliability of a new 11-item quality appraisal tool for studies of diagnostic reliability (QAREL). The tool was tested on studies reporting the reliability of any physical examination procedure. The reliability of physical examination is a challenging area to study given the complex testing procedures, the range of tests, and lack of procedural standardisation. Three reviewers used QAREL to independently rate 29 articles, comprising 30 studies, published during 2007. The articles were identified from a search of relevant databases using the following string: "Reproducibility of results (MeSH) OR reliability (t.w.) AND Physical examination (MeSH) OR physical examination (t.w.)." A total of 415 articles were retrieved and screened for inclusion. The reviewers undertook an independent trial assessment prior to data collection, followed by a general discussion about how to score each item. At no time did the reviewers discuss individual papers. Reliability was assessed for each item using multi-rater kappa (κ). Multi-rater reliability estimates ranged from κ = 0.27 to 0.92 across all items. Six items were recorded with good reliability (κ > 0.60), three with moderate reliability (κ = 0.41 - 0.60), and two with fair reliability (κ = 0.21 - 0.40). Raters found it difficult to agree about the spectrum of patients included in a study (Item 1) and the correct application and interpretation of the test (Item 10). In this study, we found that QAREL was a reliable assessment tool for studies of diagnostic reliability when raters agreed upon criteria for the interpretation of each item. Nine out of 11 items had good or moderate reliability, and two items achieved fair reliability. The heterogeneity in the tests included in this study may have resulted in an underestimation of the reliability of these two items. We discuss these and other factors that could affect our results and make recommendations for the use of QAREL.
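For more than two raters, as in this three-reviewer study, agreement per item is typically summarized with a multi-rater (Fleiss-style) kappa; a minimal sketch, assuming every item is rated by the same number of raters:

```python
def fleiss_kappa(ratings):
    """Multi-rater agreement (Fleiss' kappa) from a table where
    ratings[i][j] = number of raters assigning item i to category j.
    Assumes the same number of raters for every item."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_total = n_items * n_raters
    # Marginal proportion of all ratings falling in each category.
    p_j = [sum(row[j] for row in ratings) / n_total
           for j in range(len(ratings[0]))]
    # Observed pairwise agreement within each item.
    p_i = [(sum(c * c for c in row) - n_raters) /
           (n_raters * (n_raters - 1)) for row in ratings]
    p_bar = sum(p_i) / n_items
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Three raters, two categories, perfect agreement on all three items.
kappa = fleiss_kappa([[3, 0], [0, 3], [3, 0]])
```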
Enhanced Component Performance Study: Air-Operated Valves 1998-2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-11-01
This report presents a performance evaluation of air-operated valves (AOVs) at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The AOV failure modes considered are failure-to-open/close, failure to operate or control, and spurious operation. The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates for reliability are provided for the entire active period. One statistically significantmore » trend was observed in the AOV data: The frequency of demands per reactor year for valves recording the fail-to-open or fail-to-close failure modes, for high-demand valves (those with greater than twenty demands per year), was found to be decreasing. The decrease was about three percent over the ten year period trended.« less
Barrett, Eva; McCreesh, Karen; Lewis, Jeremy
2014-02-01
A wide array of instruments are available for non-invasive thoracic kyphosis measurement. Guidelines for selecting outcome measures for use in clinical and research practice recommend that properties such as validity and reliability are considered. This systematic review reports on the reliability and validity of non-invasive methods for measuring thoracic kyphosis. A systematic search of 11 electronic databases located studies assessing reliability and/or validity of non-invasive thoracic kyphosis measurement techniques. Two independent reviewers used a critical appraisal tool to assess the quality of retrieved studies. Data was extracted by the primary reviewer. The results were synthesized qualitatively using a level of evidence approach. 27 studies satisfied the eligibility criteria and were included in the review. The reliability, validity and both reliability and validity were investigated by sixteen, two and nine studies respectively. 17/27 studies were deemed to be of high quality. In total, 15 methods of thoracic kyphosis were evaluated in retrieved studies. All investigated methods showed high (ICC ≥ .7) to very high (ICC ≥ .9) levels of reliability. The validity of the methods ranged from low to very high. The strongest levels of evidence for reliability exists in support of the Debrunner kyphometer, Spinal Mouse and Flexicurve index, and for validity supports the arcometer and Flexicurve index. Further reliability and validity studies are required to strengthen the level of evidence for the remaining methods of measurement. This should be addressed by future research. Copyright © 2013 Elsevier Ltd. All rights reserved.
Piqueras, Jose A; Martín-Vivar, María; Sandin, Bonifacio; San Luis, Concepción; Pineda, David
2017-08-15
Anxiety and depression are among the most common mental disorders during childhood and adolescence. Among the instruments for the brief screening assessment of symptoms of anxiety and depression, the Revised Child Anxiety and Depression Scale (RCADS) is one of the more widely used. Previous studies have demonstrated the reliability of the RCADS for different assessment settings and different versions. The aims of this study were to examine the mean reliability of the RCADS and the influence of the moderators on the RCADS reliability. We searched in EBSCO, PsycINFO, Google Scholar, Web of Science, and NCBI databases and other articles manually from lists of references of extracted articles. A total of 146 studies were included in our meta-analysis. The RCADS showed robust internal consistency reliability in different assessment settings, countries, and languages. We only found that reliability of the RCADS was significantly moderated by the version of RCADS. However, these differences in reliability between different versions of the RCADS were slight and can be due to the number of items. We did not examine factor structure, factorial invariance across gender, age, or country, and test-retest reliability of the RCADS. The RCADS is a reliable instrument for cross-cultural use, with the advantage of providing more information with a low number of items in the assessment of both anxiety and depression symptoms in children and adolescents. Copyright © 2017. Published by Elsevier B.V.
Very large database of lipids: rationale and design.
Martin, Seth S; Blaha, Michael J; Toth, Peter P; Joshi, Parag H; McEvoy, John W; Ahmed, Haitham M; Elshazly, Mohamed B; Swiger, Kristopher J; Michos, Erin D; Kwiterovich, Peter O; Kulkarni, Krishnaji R; Chimera, Joseph; Cannon, Christopher P; Blumenthal, Roger S; Jones, Steven R
2013-11-01
Blood lipids have major cardiovascular and public health implications. Lipid-lowering drugs are prescribed based in part on categorization of patients into normal or abnormal lipid metabolism, yet relatively little emphasis has been placed on: (1) the accuracy of current lipid measures used in clinical practice, (2) the reliability of current categorizations of dyslipidemia states, and (3) the relationship of advanced lipid characterization to other cardiovascular disease biomarkers. To these ends, we developed the Very Large Database of Lipids (NCT01698489), an ongoing database protocol that harnesses deidentified data from the daily operations of a commercial lipid laboratory. The database includes individuals who were referred for clinical purposes for a Vertical Auto Profile (Atherotech Inc., Birmingham, AL), which directly measures cholesterol concentrations of low-density lipoprotein, very low-density lipoprotein, intermediate-density lipoprotein, high-density lipoprotein, their subclasses, and lipoprotein(a). Individual Very Large Database of Lipids studies, ranging from studies of measurement accuracy, to dyslipidemia categorization, to biomarker associations, to characterization of rare lipid disorders, are investigator-initiated and utilize peer-reviewed statistical analysis plans to address a priori hypotheses/aims. In the first database harvest (Very Large Database of Lipids 1.0) from 2009 to 2011, there were 1 340 614 adult and 10 294 pediatric patients; the adult sample had a median age of 59 years (interquartile range, 49-70 years) with even representation by sex. Lipid distributions closely matched those from the population-representative National Health and Nutrition Examination Survey. The second harvest of the database (Very Large Database of Lipids 2.0) is underway. 
Overall, the Very Large Database of Lipids database provides an opportunity for collaboration and new knowledge generation through careful examination of granular lipid data on a large scale. © 2013 Wiley Periodicals, Inc.
Birch, Colin P D; Del Rio Vilas, Victor J; Chikukwa, Ambrose C
2010-09-01
Movement records are often used to identify animal sample provenance by retracing the movements of individuals. Here we present an alternative method, which uses the same identity tags and movement records as are used to retrace movements, but ignores individual movement paths. The first step uses a simple query to identify the most likely birth holding for every identity tag included in a database recording departures from agricultural holdings. The second step rejects a proportion of the birth holding locations to leave a list of birth holding locations that are relatively reliable. The method was used to trace the birth locations of sheep sampled for scrapie in abattoirs, or on farm as fallen stock. Over 82% of the sheep sampled in the fallen stock survey died at the holding of birth. This lack of movement may be an important constraint on scrapie transmission. These static sheep provided relatively reliable birth locations, which were used to define criteria for selecting reliable traces. The criteria rejected 16.8% of fallen stock traces and 11.9% of abattoir survey traces. Two tests provided estimates that selection reduced error in fallen stock traces from 11.3% to 3.2%, and in abattoir survey traces from 8.1% to 1.8%. This method generated 14,591 accepted traces of fallen stock from samples taken during 2002-2005 and 83,136 accepted traces from abattoir samples. The absence or ambiguity of flock tag records at the time of slaughter prevented the tracing of 16-24% of abattoir samples during 2002-2004, although flock tag records improved in 2005. The use of internal scoring to generate and evaluate results from the database query, and the confirmation of results by comparison with other database fields, are analogous to methods used in web search engines. Such methods may have wide application in tracing samples and in adding value to biological datasets. Crown Copyright 2010. Published by Elsevier B.V. All rights reserved.
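The two-step trace (take the earliest departure per tag as the birth holding, then reject a proportion of traces as unreliable) can be sketched as follows; the record layout and the rejection cut-off are illustrative assumptions, not the study's actual criteria:

```python
def trace_births(departures, max_first_age_days=365):
    """Step 1: for each tag, take the holding of the earliest recorded
    departure as the likely birth holding. Step 2: reject traces whose
    first recorded departure happened implausibly late in life (the
    cut-off is an illustrative stand-in for the paper's criteria).

    departures: list of (tag, holding, days_since_birth_at_departure).
    """
    first = {}
    for tag, holding, age_days in departures:
        if tag not in first or age_days < first[tag][1]:
            first[tag] = (holding, age_days)
    return {tag: holding for tag, (holding, age) in first.items()
            if age <= max_first_age_days}

moves = [("UK1", "farmA", 120), ("UK1", "farmB", 300),
         ("UK2", "farmC", 900)]           # UK2's trace is rejected
births = trace_births(moves)
```

Ignoring the full movement path keeps the query cheap over millions of records while the second step trades coverage for reliability, as in the reported 11.3% to 3.2% error reduction.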
Saatkamp, Arne; Affre, Laurence; Dutoit, Thierry; Poschlod, Peter
2009-09-01
Seed survival in the soil contributes to population persistence and community diversity, creating a need for reliable measures of soil seed bank persistence. Several methods estimate soil seed bank persistence, most of which count seedlings emerging from soil samples. Seasonality, depth distribution and presence (or absence) in vegetation are then used to classify a species' soil seed bank into persistent or transient, often synthesized into a longevity index. This study aims to determine if counts of seedlings from soil samples yield reliable seed bank persistence estimates and if this is correlated to seed production. Seeds of 38 annual weeds taken from arable fields were buried in the field and their viability tested by germination and tetrazolium tests at 6 month intervals for 2.5 years. This direct measure of soil seed survival was compared with indirect estimates from the literature, which use seedling emergence from soil samples to determine seed bank persistence. Published databases were used to explore the generality of the influence of reproductive capacity on seed bank persistence estimates from seedling emergence data. There was no relationship between a species' soil seed survival in the burial experiment and its seed bank persistence estimate from published data using seedling emergence from soil samples. The analysis of complementary data from published databases revealed that while seed bank persistence estimates based on seedling emergence from soil samples are generally correlated with seed production, estimates of seed banks from burial experiments are not. The results can be explained in terms of the seed size-seed number trade-off, which suggests that the higher number of smaller seeds is compensated after germination. Soil seed bank persistence estimates correlated to seed production are therefore not useful for studies on population persistence or community diversity. 
Confusion of soil seed survival and seed production can be avoided by separate use of soil seed abundance and experimental soil seed survival.
NPTool: Towards Scalability and Reliability of Business Process Management
NASA Astrophysics Data System (ADS)
Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton
Currently, one important challenge in business process management is to provide scalability and reliability of business process executions at the same time. This difficulty becomes more accentuated when the execution control must handle countless complex business processes. This work presents NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by the Navigation Plan Definition Language (NPDL), a language for business process specification that uses process algebra as its formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of NPTool showing how process algebra features combined with a relational database model can provide scalable and reliable control of the execution of business processes. The next steps for NPTool include reuse of control-flow patterns and support for data flow management.
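The idea of driving process execution control from a relational store can be sketched with a toy schema; the table layout and operator column below are illustrative assumptions, not the actual NPDL grammar or its SQL extension:

```python
import sqlite3

# Persist each step of a business process in a relational store so that
# execution state survives failures and can be queried for control.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE step (
    process_id TEXT, seq INTEGER, activity TEXT,
    operator TEXT,            -- '.' sequential, '|' parallel (toy subset)
    status TEXT)""")

def start(process_id, plan):
    """Register a navigation plan as ordered (activity, operator) rows."""
    for seq, (activity, op) in enumerate(plan):
        con.execute("INSERT INTO step VALUES (?,?,?,?, 'pending')",
                    (process_id, seq, activity, op))

def complete_next(process_id):
    """Mark the next pending activity as done and return its name."""
    row = con.execute("""SELECT seq, activity FROM step
                         WHERE process_id=? AND status='pending'
                         ORDER BY seq LIMIT 1""", (process_id,)).fetchone()
    if row is None:
        return None
    con.execute("UPDATE step SET status='done' "
                "WHERE process_id=? AND seq=?", (process_id, row[0]))
    return row[1]

start("order-42", [("reserve_stock", "."), ("bill_card", "."),
                   ("ship", ".")])
first_done = complete_next("order-42")
```

Because the execution state lives in the database rather than in process memory, many concurrent processes can be controlled and recovered uniformly, which is the scalability/reliability argument the paper makes.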
OperomeDB: A Database of Condition-Specific Transcription Units in Prokaryotic Genomes.
Chetal, Kashish; Janga, Sarath Chandra
2015-01-01
Background. In prokaryotic organisms, a substantial fraction of adjacent genes are organized into operons: codirectionally organized genes in prokaryotic genomes sharing a common promoter and terminator. Although several available operon databases provide information with varying levels of reliability, very few resources provide experimentally supported results. Therefore, we believe that the biological community could benefit from a new operon prediction database with operons predicted using next-generation RNA-seq datasets. Description. We present operomeDB, a database which provides an ensemble of all the predicted operons for bacterial genomes using available RNA-sequencing datasets across a wide range of experimental conditions. Although several studies have recently confirmed that prokaryotic operon structure is dynamic, with significant alterations across environmental and experimental conditions, there are no comprehensive databases for studying such variations across prokaryotic transcriptomes. Currently our database contains nine bacterial organisms and 168 transcriptomes for which we predicted operons. The user interface is simple and easy to use for visualization, downloading, and querying of data. In addition, because of its ability to load custom datasets, users can also compare their own datasets with publicly available transcriptomic data of an organism. Conclusion. OperomeDB should not only aid experimental groups working on transcriptome analysis of specific organisms but also enable studies related to computational and comparative operomics.
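A condition-specific operon call of the sort underlying such predictions can be sketched as merging codirectional neighbors whose intergenic gap retains RNA-seq coverage; the threshold and data layout here are illustrative assumptions, not operomeDB's actual pipeline:

```python
def predict_operons(genes, coverage, min_cov=5.0):
    """Group codirectional adjacent genes into one transcription unit
    when read coverage across the intergenic gap stays above a
    threshold, suggesting co-transcription under this condition.

    genes: list of (name, start, end, strand), sorted by start.
    coverage: per-base read depth along the genome.
    """
    operons, current = [], [genes[0]]
    for prev, nxt in zip(genes, genes[1:]):
        gap = coverage[prev[2]:nxt[1]]
        linked = (prev[3] == nxt[3] and
                  (not gap or min(gap) >= min_cov))
        if linked:
            current.append(nxt)
        else:
            operons.append([g[0] for g in current])
            current = [nxt]
    operons.append([g[0] for g in current])
    return operons

# Coverage stays high between gA and gB but drops before gC.
genes = [("gA", 0, 10, "+"), ("gB", 12, 20, "+"), ("gC", 25, 30, "+")]
cov = [8.0] * 20 + [1.0] * 10
units = predict_operons(genes, cov)
```

Running the same call on coverage from a different condition can split or fuse units, which is exactly the condition-specific dynamism the database is built to expose.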
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarmie, N.; Rogers, F.J.
The authors have completed a 5-year survey (1991--1995) of macromycetes found in Los Alamos County, Los Alamos National Laboratory, and Bandelier National Monument. The authors have compiled a database of 1,048 collections, their characteristics, and identifications. The database represents 123 (98%) genera and 175 (73%) species reliably identified. Issues of habitat loss, species extinction, and ecological relationships are addressed, and comparisons with other surveys are made. With this baseline information and modeling of this baseline data, one can begin to understand more about the fungal flora of the area.
NCBI2RDF: Enabling Full RDF-Based Access to NCBI Databases
Anguita, Alberto; García-Remesal, Miguel; de la Iglesia, Diana; Maojo, Victor
2013-01-01
RDF has become the standard technology for enabling interoperability among heterogeneous biomedical databases. The NCBI provides access to a large set of life sciences databases through a common interface called Entrez. However, the latter does not provide RDF-based access to such databases, and, therefore, they cannot be integrated with other RDF-compliant databases and accessed via SPARQL query interfaces. This paper presents the NCBI2RDF system, aimed at providing RDF-based access to the complete NCBI data repository. This API creates a virtual endpoint for servicing SPARQL queries over different NCBI repositories and presenting the query results to users in the SPARQL results format, thus enabling this data to be integrated and/or stored with other RDF-compliant repositories. SPARQL queries are dynamically resolved, decomposed, and forwarded to the NCBI-provided E-utilities programmatic interface to access the NCBI data. Furthermore, we show how our approach increases the expressiveness of the native NCBI querying system, allowing several databases to be accessed simultaneously. This feature significantly boosts productivity when working with complex queries and saves biomedical researchers time and effort. Our approach has been validated with a large number of SPARQL queries, thus proving its reliability and enhanced capabilities in biomedical environments. PMID:23984425
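The decomposition step the abstract describes, turning a SPARQL triple pattern into an E-utilities call, can be sketched as a URL builder. The ESearch endpoint and its `db`/`term`/`retmode` parameters are real E-utilities features; the triple-to-term mapping below is a simplified assumption of mine, not NCBI2RDF's actual rewriting logic.

```python
from urllib.parse import urlencode

# Sketch of SPARQL-to-E-utilities translation: map one triple pattern
# (?s <predicate> "value") onto an NCBI ESearch request. The endpoint and
# query parameters are the documented E-utilities ones; the mapping from a
# predicate to a fielded search term is illustrative only.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def triple_to_esearch(db, predicate, value):
    """Build an ESearch URL for a single (predicate, value) triple pattern."""
    term = f"{value}[{predicate}]"      # e.g. BRCA1[Gene Name]
    return EUTILS + "?" + urlencode({"db": db, "term": term, "retmode": "json"})

url = triple_to_esearch("gene", "Gene Name", "BRCA1")
```

A real mediator would then fetch the URL, parse the ID list, and bind the results back into SPARQL variables; only the URL construction is shown here.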
Surviving the Glut: The Management of Event Streams in Cyberphysical Systems
NASA Astrophysics Data System (ADS)
Buchmann, Alejandro
Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems IIMAS/UNAM in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined T.U. Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of the current research is concerned with guaranteeing quality of service and reliability properties in these systems, for example, scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects involve collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de
Challenges to Global Implementation of Infrared Thermography Technology: Current Perspective
Shterenshis, Michael
2017-01-01
Medical infrared thermography (IT) produces an image of the infrared waves emitted by the human body as part of the thermoregulation process that can vary in intensity based on the health of the person. This review analyzes recent developments in the use of infrared thermography as a screening and diagnostic tool in clinical and nonclinical settings, and identifies possible future routes for improvement of the method. Currently, infrared thermography is not considered to be a fully reliable diagnostic method. If standard infrared protocol is established and a normative database is available, infrared thermography may become a reliable method for detecting inflammatory processes. PMID:29138741
The use of intelligent database systems in acute pancreatitis--a systematic review.
van den Heever, Marc; Mittal, Anubhav; Haydock, Matthew; Windsor, John
2014-01-01
Acute pancreatitis (AP) is a complex disease with multiple aetiological factors, wide-ranging severity, and multiple challenges to effective triage and management. Databases, data mining and machine learning algorithms (MLAs), including artificial neural networks (ANNs), may assist by storing and interpreting data from multiple sources, potentially improving clinical decision-making. The aims were to: 1) identify database technologies used to store AP data, 2) collate and categorise variables stored in AP databases, 3) identify the MLA technologies, including ANNs, used to analyse AP data, and 4) identify clinical and non-clinical benefits and obstacles in establishing a national or international AP database. A comprehensive systematic search of online reference databases was conducted. The predetermined inclusion criteria were all papers discussing 1) databases, 2) data mining or 3) MLAs, pertaining to AP, independently assessed by two reviewers with conflicts resolved by a third author. Forty-three papers were included. Three data mining technologies and five ANN methodologies were reported in the literature. A total of 187 collected variables were identified. ANNs increase the accuracy of severity prediction: one study showed ANNs had a sensitivity of 0.89 and specificity of 0.96 six hours after admission, compared with 0.80 and 0.85, respectively, for APACHE II (cutoff score ≥8). Problems with databases included incomplete data, lack of diagnostic reliability, and missing clinical data. This is the first systematic review examining the use of databases, MLAs and ANNs in the management of AP. The clinical benefits these technologies have over current systems and other advantages to adopting them are identified. Copyright © 2013 IAP and EPC. Published by Elsevier B.V. All rights reserved.
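The sensitivity and specificity figures quoted above follow directly from a confusion matrix. A minimal sketch follows; note the counts are invented solely to reproduce the abstract's 0.89/0.96 and are not data from the cited study.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts chosen only to reproduce the quoted ANN figures.
se, sp = sensitivity_specificity(tp=89, fn=11, tn=96, fp=4)
```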
Keller, Gordon R.; Hildenbrand, T.G.; Kucks, R.; Webring, M.; Briesacher, A.; Rujawitz, K.; Hittleman, A.M.; Roman, D.R.; Winester, D.; Aldouri, R.; Seeley, J.; Rasillo, J.; Torres, R.; Hinze, W. J.; Gates, A.; Kreinovich, V.; Salayandia, L.
2006-01-01
Potential field data (gravity and magnetic measurements) are both useful and cost-effective tools for many geologic investigations. Significant amounts of these data are traditionally in the public domain. A new magnetic database for North America was released in 2002, and as a result, a cooperative effort between government agencies, industry, and universities to compile an upgraded digital gravity anomaly database, grid, and map for the conterminous United States was initiated and is the subject of this paper. This database is being crafted into a data system that is accessible through a Web portal. This data system features the database, software tools, and convenient access. The Web portal will enhance the quality and quantity of data contributed to the gravity database that will be a shared community resource. The system's totally digital nature ensures that it will be flexible so that it can grow and evolve as new data, processing procedures, and modeling and visualization tools become available. Another goal of this Web-based data system is facilitation of the efforts of researchers and students who wish to collect data from regions currently not represented adequately in the database. The primary goal of upgrading the United States gravity database and this data system is to provide more reliable data that support societal and scientific investigations of national importance. An additional motivation is the international intent to compile an enhanced North American gravity database, which is critical to understanding regional geologic features, the tectonic evolution of the continent, and other issues that cross national boundaries. © 2006 Geological Society of America. All rights reserved.
ProteinWorldDB: querying radical pairwise alignments among protein sets from complete genomes.
Otto, Thomas Dan; Catanho, Marcos; Tristão, Cristian; Bezerra, Márcia; Fernandes, Renan Mathias; Elias, Guilherme Steinberger; Scaglia, Alexandre Capeletto; Bovermann, Bill; Berstis, Viktors; Lifschitz, Sergio; de Miranda, Antonio Basílio; Degrave, Wim
2010-03-01
Many analyses in modern biological research are based on comparisons between biological sequences, resulting in functional, evolutionary and structural inferences. When large numbers of sequences are compared, heuristics are often used resulting in a certain lack of accuracy. In order to improve and validate results of such comparisons, we have performed radical all-against-all comparisons of 4 million protein sequences belonging to the RefSeq database, using an implementation of the Smith-Waterman algorithm. This extremely intensive computational approach was made possible with the help of World Community Grid, through the Genome Comparison Project. The resulting database, ProteinWorldDB, which contains coordinates of pairwise protein alignments and their respective scores, is now made available. Users can download, compare and analyze the results, filtered by genomes, protein functions or clusters. ProteinWorldDB is integrated with annotations derived from Swiss-Prot, Pfam, KEGG, NCBI Taxonomy database and gene ontology. The database is a unique and valuable asset, representing a major effort to create a reliable and consistent dataset of cross-comparisons of the whole protein content encoded in hundreds of completely sequenced genomes using a rigorous dynamic programming approach. The database can be accessed through http://proteinworlddb.org
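The rigorous dynamic-programming approach named above is the Smith-Waterman local-alignment algorithm. A minimal scoring-only sketch follows; the project's actual implementation, substitution matrices, and gap penalties are not given in the abstract, so the simple match/mismatch/gap parameters here are illustrative assumptions.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best Smith-Waterman local-alignment score between a and b.

    H[i][j] is the best score of any local alignment ending at a[i-1], b[j-1];
    the 0 in the max() lets a poor prefix be discarded, which is what makes
    the alignment local rather than global.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

Running this all-against-all over 4 million RefSeq proteins is what required a volunteer grid; the O(len(a) x len(b)) cost per pair is exactly why heuristics like BLAST are normally used instead.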
Intelligent databases assist transparent and sound economic valuation of ecosystem services.
Villa, Ferdinando; Ceroni, Marta; Krivov, Sergey
2007-06-01
Assessment and economic valuation of services provided by ecosystems to humans has become a crucial phase in environmental management and policy-making. As primary valuation studies are out of the reach of many institutions, secondary valuation or benefit transfer, where the results of previous studies are transferred to the geographical, environmental, social, and economic context of interest, is becoming increasingly common. This has brought to light the importance of environmental valuation databases, which provide reliable valuation data to inform secondary valuation with enough detail to enable the transfer of values across contexts. This paper describes the role of next-generation, intelligent databases (IDBs) in assisting the activity of valuation. Such databases employ artificial intelligence to inform the transfer of values across contexts, enforcing comparability of values and allowing users to generate custom valuation portfolios that synthesize previous studies and provide aggregated value estimates to use as a base for secondary valuation. After a general introduction, we introduce the Ecosystem Services Database, the first IDB for environmental valuation to be made available to the public, describe its functionalities and the lessons learned from its usage, and outline the remaining needs and expected future developments in the field.
Smeets, Hugo M; de Wit, Niek J; Hoes, Arno W
2011-04-01
Observational studies performed within routine health care databases have the advantage of their large size and, when the aim is to assess the effect of interventions, can complement randomized controlled trials, which usually have small samples drawn from experimental situations. Institutional Health Insurance Databases (HIDs) are attractive for research because of their large size, their longitudinal perspective, and their practice-based information. As they are based on financial reimbursement, the information is generally reliable. The database of one of the major insurance companies in the Netherlands, the Agis Health Database (AHD), is described in detail. Whether the AHD data sets meet the specific requirements to conduct several types of clinical studies is discussed according to the classification of the four different types of clinical research, that is, diagnostic, etiologic, prognostic, and intervention research. The potential of the AHD for these various types of research is illustrated using examples of studies recently conducted in the AHD. HIDs such as the AHD offer large potential for several types of clinical research, in particular etiologic and intervention studies, but at present the lack of detailed clinical information is an important limitation. Copyright © 2011 Elsevier Inc. All rights reserved.
Ojima-Kato, Teruyo; Yamamoto, Naomi; Takahashi, Hajime; Tamura, Hiroto
2016-01-01
The genetic lineages of Listeria monocytogenes and other species of the genus Listeria are correlated with pathogenesis in humans. Although matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) has become a prevailing tool for rapid and reliable microbial identification, the precise discrimination of Listeria species and lineages remains a crucial issue in clinical settings and for food safety. In this study, we constructed an accurate and reliable MS database to discriminate the lineages of L. monocytogenes and the species of Listeria (L. monocytogenes, L. innocua, L. welshimeri, L. seeligeri, L. ivanovii, L. grayi, and L. rocourtiae) based on the S10-spc-alpha operon gene encoded ribosomal protein mass spectrum (S10-GERMS) proteotyping method, which relies on both genetic information (genomics) and observed MS peaks in MALDI-TOF MS (proteomics). The specific set of eight biomarkers (ribosomal proteins L24, L6, L18, L15, S11, S9, L31 type B, and S16) yielded characteristic MS patterns for the lineages of L. monocytogenes and the different species of Listeria, and led to the construction of a MS database that was successful in discriminating between these organisms in MALDI-TOF MS fingerprinting analysis followed by advanced proteotyping software Strain Solution analysis. We also confirmed the constructed database on the proteotyping software Strain Solution by using 23 Listeria strains collected from natural sources.
A wavelet-based ECG delineation algorithm for 32-bit integer online processing.
Di Marco, Luigi Y; Chiari, Lorenzo
2011-04-03
Since the first well-known electrocardiogram (ECG) delineator based on Wavelet Transform (WT) presented by Li et al. in 1995, a significant research effort has been devoted to the exploitation of this promising method. Its ability to reliably delineate the major waveform components (mono- or bi-phasic P wave, QRS, and mono- or bi-phasic T wave) would make it a suitable candidate for efficient online processing of ambulatory ECG signals. Unfortunately, previous implementations of this method adopt non-linear operators such as root mean square (RMS) or floating point algebra, which are computationally demanding. This paper presents a 32-bit integer, linear algebra advanced approach to online QRS detection and P-QRS-T waves delineation of a single lead ECG signal, based on WT. The QRS detector performance was validated on the MIT-BIH Arrhythmia Database (sensitivity Se = 99.77%, positive predictive value P+ = 99.86%, on 109010 annotated beats) and on the European ST-T Database (Se = 99.81%, P+ = 99.56%, on 788050 annotated beats). The ECG delineator was validated on the QT Database, showing a mean error between manual and automatic annotation below 1.5 samples for all fiducial points: P-onset, P-peak, P-offset, QRS-onset, QRS-offset, T-peak, T-offset, and a mean standard deviation comparable to other established methods. The proposed algorithm exhibits reliable QRS detection as well as accurate ECG delineation, in spite of a simple structure built on integer linear algebra.
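The integer-only constraint can be illustrated with a one-level integer Haar-like lifting step. To be clear about the assumption: the paper's delineator follows Li et al.'s quadratic-spline wavelet, not Haar; this sketch only shows how a wavelet stage can be computed with shifts and subtractions instead of floating point.

```python
def haar_lifting_int(x):
    """One level of an integer Haar-like lifting step (32-bit friendly).

    Approximation s[k] = (x[2k] + x[2k+1]) >> 1 (a smoothed, downsampled
    signal) and detail d[k] = x[2k] - x[2k+1] (high-frequency content, where
    QRS energy concentrates). Only integer adds, subtracts, and shifts are
    used, matching the 32-bit-integer online-processing constraint.
    """
    s = [(x[2 * k] + x[2 * k + 1]) >> 1 for k in range(len(x) // 2)]
    d = [x[2 * k] - x[2 * k + 1] for k in range(len(x) // 2)]
    return s, d

s, d = haar_lifting_int([4, 2, 8, 8])
```

A QRS detector would then threshold the detail coefficients across scales; that logic is omitted here.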
Evidence for Substance Abuse Services and Policy Research: A Systematic Review of National Databases
ERIC Educational Resources Information Center
Coffey, Rosanna M.; Levit, Katharine R.; Kassed, Cheryl A.; McLellan, A. Thomas; Chalk, Mady; Brady, Thomas M.; Vandivort-Warren, Rita
2009-01-01
We reviewed 39 national government- and nongovernment-sponsored data sets related to substance addiction policy. These data sets describe patients with substance use disorders (SUDs), treatment providers and the services they offer, and/or expenditures on treatment. Findings indicate the availability of reliable data on the prevalence of SUD and…
Gender Convergence in the American Heritage Time Use Study (AHTUS)
ERIC Educational Resources Information Center
Fisher, Kimberly; Egerton, Muriel; Gershuny, Jonathan I.; Robinson, John P.
2007-01-01
We present evidence from a new comprehensive database of harmonized national time-diary data that standardizes information on almost 40 years of daily life in America. The advantages of the diary method over other ways of calculating how time is spent are reviewed, along with its ability to generate more reliable and accurate measures of…
Meta-Analysis that Conceals More than It Reveals: Comment on Storm Et Al. (2010)
ERIC Educational Resources Information Center
Hyman, Ray
2010-01-01
Storm, Tressoldi, and Di Risio (2010) rely on meta-analyses to justify their claim that the evidence for psi is consistent and reliable. They manufacture apparent homogeneity and consistency by eliminating many outliers and combining databases whose combined effect sizes are not significantly different--even though these combined effect sizes…
Amber J. Vanden Wymelenberg; Grzegorz Sabat; Diego Martinez; Alex S. Rajangam; Tuula T. Teeri; Jill A. Gaskell; Philip J. Kersten; Daniel Cullen
2005-01-01
The white rot basidiomycete, Phanerochaete chrysosporium, employs an array of extracellular enzymes to completely degrade the major polymers of wood: cellulose, hemicellulose and lignin. Towards the identification of participating enzymes, 268 likely secreted proteins were predicted using SignalP and TargetP algorithms. To assess the reliability of secretome...
Nailing Digital Jelly to a Virtual Tree: Tracking Emerging Technologies for Learning
ERIC Educational Resources Information Center
Serim, Ferdi; Schrock, Kathy
2008-01-01
Reliable information on emerging technologies for learning is as vital as it is difficult to come by. To meet this need, the International Society for Technology in Education organized the Emerging Technologies Task Force. Its goal is to create a database of contributions from educators highlighting their use of emerging technologies to support…
Visual and Verbal Learning in a Genetic Metabolic Disorder
ERIC Educational Resources Information Center
Spilkin, Amy M.; Ballantyne, Angela O.; Trauner, Doris A.
2009-01-01
Visual and verbal learning in a genetic metabolic disorder (cystinosis) were examined in the following three studies. The goal of Study I was to provide a normative database and establish the reliability and validity of a new test of visual learning and memory (Visual Learning and Memory Test; VLMT) that was modeled after a widely used test of…
Database of Schools Enrolling Migrant Children: An Overview.
ERIC Educational Resources Information Center
Henderson, Allison
Until recently, there was no information on U.S. schools attended by migrant children and their characteristics. Migrant children and youth often were excluded from major educational studies because of the lack of a nationally reliable sampling frame of schools or districts enrolling migrant children. In an effort to fill this gap, the U.S.…
Structural composite panel performance under long-term load
Theodore L. Laufenberg
1988-01-01
Information on the performance of wood-based structural composite panels under long-term load is currently needed to permit their use in engineered assemblies and systems. A broad assessment of the time-dependent properties of panels is critical for creating databases and models of the creep-rupture phenomenon that lead to reliability-based design procedures. This...
Realist Evaluation in Wraparound: A New Approach in Social Work Evidence-Based Practice
ERIC Educational Resources Information Center
Kazi, Mansoor A. F.; Pagkos, Brian; Milch, Heidi A.
2011-01-01
Objectives: The purpose of this study was to develop a realist evaluation paradigm in social work evidence-based practice. Method: Wraparound (at Gateway-Longview Inc., New York) used a reliable outcome measure and an electronic database to systematically collect and analyze data on the interventions, the client demographics and circumstances, and…
Reliability Impacts in Life Support Architecture and Technology Selection
NASA Technical Reports Server (NTRS)
Lange, Kevin E.; Anderson, Molly S.
2011-01-01
Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
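Under the constant-failure-rate assumption stated above, an item's mission reliability is exponential in mission time, carrying n replacement spares turns it into a Poisson cumulative sum, and a single-string (serial) architecture multiplies component reliabilities. A minimal sketch of that arithmetic follows; it is not the paper's actual model, which used fault trees and ISS MTBF data.

```python
import math

def item_reliability(mtbf_hours, mission_hours, spares=0):
    """Probability an item with constant failure rate 1/MTBF keeps its
    function through the mission when up to `spares` failed units can be
    replaced: the Poisson CDF at `spares` with mean lambda*t."""
    lam_t = mission_hours / mtbf_hours
    return sum(lam_t ** k * math.exp(-lam_t) / math.factorial(k)
               for k in range(spares + 1))

def series_system(component_reliabilities):
    """A serial (single-string) architecture works only if every component
    works, so reliabilities multiply, and each added serial technology
    drags the system reliability down."""
    r = 1.0
    for ri in component_reliabilities:
        r *= ri
    return r

r_no_spares = item_reliability(1000, 1000, spares=0)   # exp(-1)
r_two_spares = item_reliability(1000, 1000, spares=2)
```

The multiplication in `series_system` is why the abstract notes that adding technologies serially forces every other component's reliability (and spares mass) upward to hold the system-level requirement.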
Gorgos, Kara S; Wasylyk, Nicole T; Van Lunen, Bonnie L; Hoch, Matthew C
2014-04-01
Joint mobilizations are commonly used by clinicians to decrease pain and restore joint arthrokinematics following musculoskeletal injury. The force applied during a joint mobilization treatment is subjective to the individual clinician but may have an effect on patient outcomes. The purpose of this systematic review was to critically appraise and synthesize the studies which examined the reliability of clinicians' force application during joint mobilization. A systematic search of PubMed and EBSCO Host databases from inception to March 1, 2013 was conducted to identify studies assessing the reliability of force application during joint mobilizations. Two reviewers utilized the Quality Appraisal of Reliability Studies (QAREL) assessment tool to determine the quality of included studies. The relative reliability of the included studies was examined through intraclass correlation coefficients (ICC) to synthesize study findings. All results were collated qualitatively with a level of evidence approach. A total of seven studies met the eligibility criteria and were included. Five studies assessed inter-clinician reliability, and six studies assessed intra-clinician reliability. The overall level of evidence for inter-clinician reliability was strong for poor-to-moderate reliability (ICC = -0.04 to 0.70). The overall level of evidence for intra-clinician reliability was strong for good reliability (ICC = 0.75-0.99). This systematic review indicates there is variability in force application between clinicians but individual clinicians apply forces consistently. The results of this systematic review suggest innovative instructional methods are needed to improve consistency and validate the forces applied during joint mobilization treatments. This is particularly evident for improving the consistency of force application across clinicians. Copyright © 2014 Elsevier Ltd. All rights reserved.
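Intraclass correlation coefficients like those synthesized above are computed from a targets-by-raters table of measurements. A minimal one-way random-effects ICC(1,1) sketch follows; this is one common ICC form, and the reviewed studies may have used other variants.

```python
def icc_oneway(data):
    """ICC(1,1), one-way random effects.

    data: list of rows, one per target (e.g. a mobilization trial), each row
    holding the force measurements recorded for that target by k raters.
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB is the between-target
    mean square and MSW the within-target (error) mean square.
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(data, row_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

perfect = icc_oneway([[1, 1], [2, 2], [3, 3]])   # raters agree exactly
noisy = icc_oneway([[1, 2], [2, 3], [3, 5]])     # raters disagree somewhat
```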
A systematic review of the factor structure and reliability of the Spence Children's Anxiety Scale.
Orgilés, Mireia; Fernández-Martínez, Iván; Guillén-Riquelme, Alejandro; Espada, José P; Essau, Cecilia A
2016-01-15
The Spence Children's Anxiety Scale (SCAS) is a widely used instrument for assessing symptoms of anxiety disorders among children and adolescents. Previous studies have demonstrated its good reliability for children and adolescents from different backgrounds. However, remarkable variability in the reliability of the SCAS across studies and inconsistent results regarding its factor structure have been found. The present study aims to examine the SCAS factor structure by means of a systematic review with narrative synthesis, the mean reliability of the SCAS by means of a meta-analysis, and the influence of moderators on the SCAS reliability. Databases employed to collect the studies included Google Scholar, PsycARTICLES, PsycINFO, Web of Science, and Scopus since 1997. Twenty-nine and 32 studies, which examined the factor structure and the internal consistency of the SCAS, respectively, were included. The SCAS was found to have strong internal consistency, influenced by different moderators. The systematic review demonstrated that the original six-factor model was supported by most studies. Factorial invariance studies (across age, gender, country) and test-retest reliability of the SCAS were not examined in this study. It is concluded that the SCAS is a reliable instrument for cross-cultural use, and it is suggested that the original six-factor model is appropriate for cross-cultural application. Copyright © 2015 Elsevier B.V. All rights reserved.
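Internal consistency of a scale like the SCAS is typically reported as Cronbach's alpha. A minimal sketch of the standard formula follows, for illustration only: the meta-analysis above pooled published alpha values rather than computing them from raw item data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: one list per scale item, each holding that item's scores for the
    same respondents. alpha = k/(k-1) * (1 - sum(item variances) / variance
    of the total score), using sample variances throughout.
    """
    k, n = len(items), len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Two items that track each other perfectly give the maximum alpha of 1.
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3]])
```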
Mani, Suresh; Sharma, Shobha; Omar, Baharudin; Paungmali, Aatit; Joseph, Leonard
2017-04-01
Purpose The purpose of this review is to systematically explore and summarise the validity and reliability of telerehabilitation (TR)-based physiotherapy assessment for musculoskeletal disorders. Method A comprehensive systematic literature review was conducted using a number of electronic databases: PubMed, EMBASE, PsycINFO, Cochrane Library and CINAHL, published between January 2000 and May 2015. Studies that examined the validity and the inter- and intra-rater reliability of TR-based physiotherapy assessment for musculoskeletal conditions were included. Two independent reviewers used the Quality Appraisal Tool for studies of diagnostic Reliability (QAREL) and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool to assess the methodological quality of reliability and validity studies respectively. Results A total of 898 hits were achieved, of which 11 articles meeting the inclusion criteria were reviewed. Nine studies explored the concurrent validity and the inter- and intra-rater reliabilities, while two studies examined only the concurrent validity. Reviewed studies were moderate to good in methodological quality. Physiotherapy assessments such as pain, swelling, range of motion, muscle strength, balance, gait and functional assessment demonstrated good concurrent validity. However, the reported concurrent validity of lumbar spine posture, special orthopaedic tests, neurodynamic tests and scar assessments ranged from low to moderate. Conclusion TR-based physiotherapy assessment was technically feasible with overall good concurrent validity and excellent reliability, except for lumbar spine posture, orthopaedic special tests, neurodynamic tests and scar assessment.
NASA Astrophysics Data System (ADS)
Rahardianto, Trias; Saputra, Aditya; Gomez, Christopher
2017-07-01
Research on landslide susceptibility has evolved rapidly over the last few decades thanks to the availability of large databases. Landslide research used to focus on discrete events, but the use of large inventory datasets has become a central pillar of landslide susceptibility, hazard, and risk assessment. Indeed, extracting meaningful information from large databases is now at the forefront of geoscientific research, following the big-data research trend. The more comprehensive the information on past landslides in a particular area is, the better the produced map will be at supporting effective decision making, planning, and engineering practice. Landslide inventory data that are freely accessible online give many researchers and decision makers an opportunity to prevent casualties and economic loss caused by future landslides. These data are advantageous especially for areas with poor landslide historical records. Since the construction criteria for landslide inventory maps and their quality evaluation remain poorly defined, an assessment of the reliability of open-source landslide inventory maps is required. The present contribution aims to assess the reliability of open-source landslide inventory data for the particular topographical setting of the observed area in Niigata prefecture, Japan. A Geographic Information System (GIS) platform and a statistical approach are applied to analyze the data. The frequency ratio method is utilized to model and assess the landslide map. The generated model showed unsatisfactory results, with an AUC value of 0.603, indicating low prediction accuracy and the unreliability of the model.
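The frequency ratio method used in this study weights each terrain class by how over-represented landslides are within it; the summed weights give a susceptibility score that can be checked against the inventory with an AUC. A minimal stdlib-only sketch of both steps, with hypothetical cell IDs and class labels:

```python
from collections import Counter

def frequency_ratios(class_of_cell, landslide_cells, all_cells):
    """FR(class) = (% of landslide cells in class) / (% of all cells in class)."""
    slide = Counter(class_of_cell[c] for c in landslide_cells)
    total = Counter(class_of_cell[c] for c in all_cells)
    n_slide, n_total = len(landslide_cells), len(all_cells)
    return {cls: (slide.get(cls, 0) / n_slide) / (total[cls] / n_total)
            for cls in total}

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    ranked = sorted(zip(scores, labels))
    pos = sum(labels)
    neg = len(labels) - pos
    rank_sum = sum(i + 1 for i, (_, y) in enumerate(ranked) if y)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)
```

An FR above 1 marks a class that hosts more landslides than its areal share would suggest; an AUC near 0.5, as in the paper's 0.603, means the summed-FR score barely outperforms chance.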
Dichter, Martin Nikolaus; Schwab, Christian G G; Meyer, Gabriele; Bartholomeyczik, Sabine; Halek, Margareta
2016-02-01
For people with dementia, the concept of quality of life (Qol) reflects the disease's impact on the whole person. Thus, Qol is an increasingly used outcome measure in dementia research. This systematic review was performed to identify available dementia-specific Qol measurements and to assess the quality of linguistic validations and reliability studies of these measurements (PROSPERO 2013: CRD42014008725). The MEDLINE, CINAHL, EMBASE, PsycINFO, and Cochrane Methodology Register databases were systematically searched without any date restrictions. Forward and backward citation tracking were performed on the basis of the selected articles. A total of 70 articles addressing 19 dementia-specific Qol measurements were identified; nine measurements had been adapted for use outside their countries of origin. The quality of the linguistic validations varied from insufficient to good. Internal consistency was the most frequently tested reliability property. Most of the reliability studies lacked internal validity. Qol measurements for dementia are insufficiently linguistically validated and not well tested for reliability. None of the identified measurements can be recommended without further research. The application of international guidelines and quality criteria is strongly recommended for the performance of linguistic validations and reliability studies of dementia-specific Qol measurements. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, Louise F.; Harmon, Anna C.
2015-04-09
This project was funded jointly by the National Renewable Energy Laboratory (NREL) and Oak Ridge National Laboratory (ORNL). ORNL focused on developing a full basement wall system experimental database to enable others to validate hygrothermal simulation codes. NREL focused on testing the moisture durability of practical basement wall interior insulation retrofit solutions for cold climates. The project has produced a physically credible and reliable long-term hygrothermal performance database for retrofit foundation wall insulation systems in zone 6 and 7 climates that are fully compliant with the performance criteria in the 2009 Minnesota Energy Code. These data currently span the period from November 10, 2012 through May 31, 2014 and are anticipated to be extended through November 2014. The experimental data were configured into a standard format that can be published online and that is compatible with standard commercially available spreadsheet and database software.
2008 Niday Perinatal Database quality audit: report of a quality assurance project.
Dunn, S; Bottomley, J; Ali, A; Walker, M
2011-12-01
This quality assurance project was designed to determine the reliability, completeness and comprehensiveness of the data entered into the Niday Perinatal Database. Quality of the data was measured by comparing data re-abstracted from the patient record to the original data entered into the Niday Perinatal Database. A representative sample of hospitals in Ontario was selected, and a random sample of 100 linked mother and newborn charts was audited for each site. A subset of 33 variables (representing 96 data fields) from the Niday dataset was chosen for re-abstraction. Of the data fields for which Cohen's kappa statistic or intraclass correlation coefficient (ICC) was calculated, 44% showed substantial or almost perfect agreement (beyond chance). However, about 17% showed less than 95% agreement and a kappa or ICC value of less than 60%, indicating only slight, fair or moderate agreement (beyond chance). Recommendations to improve the quality of these data fields are presented.
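Agreement "beyond chance" in audits like this is typically quantified with Cohen's kappa, which compares the observed agreement between the original entry and the re-abstracted value against the agreement expected from the two coders' marginal frequencies. A small illustrative sketch (the coded values are hypothetical, not Niday fields):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement.

    coder_a / coder_b are equal-length lists of categorical codes,
    e.g. the originally entered value and the re-abstracted value.
    """
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

On the commonly used Landis-Koch scale, values below 0.60 correspond to the "slight, fair or moderate" agreement the audit flags, while values above 0.60 are "substantial" and above 0.80 "almost perfect".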
Schröter, Pauline; Schroeder, Sascha
2017-12-01
With the Developmental Lexicon Project (DeveL), we present a large-scale study that was conducted to collect data on visual word recognition in German across the lifespan. A total of 800 children from Grades 1 to 6, as well as two groups of younger and older adults, participated in the study and completed a lexical decision and a naming task. We provide a database for 1,152 German words, comprising behavioral data from seven different stages of reading development, along with sublexical and lexical characteristics for all stimuli. The present article describes our motivation for this project, explains the methods we used to collect the data, and reports analyses on the reliability of our results. In addition, we explored developmental changes in three marker effects in psycholinguistic research: word length, word frequency, and orthographic similarity. The database is available online.
Medicinal herbs and phytochitodeztherapy in oncology.
Treskunov, Karp; Treskunova, Olga; Komarov, Boris; Goroshetchenko, Alex; Glebov, Vlad
2003-01-01
Application of clinical phytology in the treatment of oncological diseases has been limited by the intensive development of chemical pharmaceuticals and surgery. The authors set out to develop a computer database for phytotherapy application. The database includes full information on each patient's clinical status (identified diseases, symptoms, syndromes) and the phytotherapy treatment applied. Special attention was paid to the application of phyto-preparations containing chitosan. The computer database contains information on 2335 patients. It provides reliable data on the efficiency of phytotherapy in general and makes it possible to evaluate the efficiency of particular medicinal herbs and to develop efficient complex phyto-preparations for the treatment of specific diseases. The application of phytotherapy in the treatment of oncology patients confirmed a positive effect on patients' quality of life. In conclusion, it should be emphasized that the present state of practical application of phytotherapy must be considered unacceptable because of the absence of the necessary knowledge and practical experience in using phytotherapy in outpatient clinics, hospitals and medical centers.
Thermodynamic properties for arsenic minerals and aqueous species
Nordstrom, D. Kirk; Majzlan, Juraj; Königsberger, Erich; Bowell, Robert J.; Alpers, Charles N.; Jamieson, Heather E.; Nordstrom, D. Kirk; Majzlan, Juraj
2014-01-01
Quantitative geochemical calculations are not possible without thermodynamic databases and considerable advances in the quantity and quality of these databases have been made since the early days of Lewis and Randall (1923), Latimer (1952), and Rossini et al. (1952). Oelkers et al. (2009) wrote, “The creation of thermodynamic databases may be one of the greatest advances in the field of geochemistry of the last century.” Thermodynamic data have been used for basic research needs and for a countless variety of applications in hazardous waste management and policy making (Zhu and Anderson 2002; Nordstrom and Archer 2003; Bethke 2008; Oelkers and Schott 2009). The challenge today is to evaluate thermodynamic data for internal consistency, to reach a better consensus of the most reliable properties, to determine the degree of certainty needed for geochemical modeling, and to agree on priorities for further measurements and evaluations.
Dangers of Noncritical Use of Historical Plague Data
Roosen, Joris
2018-01-01
Researchers have published several articles using historical data sets on plague epidemics, drawing on impressive digital databases that contain thousands of recorded outbreaks across Europe over the past several centuries. Through the digitization of preexisting data sets, scholars have unprecedented access to the historical record of plague occurrences. However, although these databases offer new research opportunities, noncritical use and reproduction of preexisting data sets can also limit our understanding of how infectious diseases evolved. Many scholars have performed investigations using Jean-Noël Biraben's data, which contains information on mentions of plague from various kinds of sources, many of which were not cited. When scholars fail to apply source criticism or do not reflect on the content of the data they use, the reliability of their results becomes highly questionable. Researchers using these databases going forward need to verify and restrict content spatially and temporally, and historians should be encouraged to compile the work.
Reliability improvements on Thales RM2 rotary Stirling coolers: analysis and methodology
NASA Astrophysics Data System (ADS)
Cauquil, J. M.; Seguineau, C.; Martin, J.-Y.; Benschop, T.
2016-05-01
Cooled IR detectors are used in a wide range of applications. Most of the time, the cryocooler is one of the components dimensioning the lifetime of the system. Cooler reliability is thus one of its most important parameters, and it has to increase to answer market needs. To do this, the data needed to identify the weakest element determining cooler reliability have to be collected. Yet field-based data collections are hardly usable due to a lack of information. A method for identifying reliability improvements that can be used even without field returns therefore has to be set up. This paper describes the method followed by Thales Cryogénie SAS to reach such a result. First, a database was built from extensive expert analyses of RM2 failures occurring during accelerated ageing. Failure modes were then identified and corrective actions carried out. Besides this, the functions of the cooler were organized hierarchically with regard to their potential to increase its efficiency. Specific changes were introduced on the functions most likely to impact efficiency. The link between efficiency and reliability is described in this paper. The work on the two axes - weak spots for cooler reliability, and efficiency - allowed us to drastically increase the MTTF of the RM2 cooler. Large improvements in RM2 reliability are proven by both field returns and reliability monitoring. These figures are discussed in the paper.
Evaluating information skills training in health libraries: a systematic review.
Brettle, Alison
2007-12-01
Systematic reviews have shown that there is limited evidence to demonstrate that the information literacy training health librarians provide is effective in improving clinicians' information skills or has an impact on patient care. Studies lack measures which demonstrate validity and reliability in evaluating the impact of training. To determine what measures have been used; the extent to which they are valid and reliable; to provide guidance for health librarians who wish to evaluate the impact of their information skills training. Systematic review methodology involved searching seven databases, and personal files. Studies were included if they were about information skills training, used an objective measure to assess outcomes, and occurred in a health setting. Fifty-four studies were included in the review. Most outcome measures used in the studies were not tested for the key criteria of validity and reliability. Three tested for validity and reliability are described in more detail. Selecting an appropriate measure to evaluate the impact of training is a key factor in carrying out any evaluation. This systematic review provides guidance to health librarians by highlighting measures used in various circumstances, and those that demonstrate validity and reliability.
Integral nuclear data validation using experimental spent nuclear fuel compositions
Gauld, Ian C.; Williams, Mark L.; Michel-Sendis, Franco; ...
2017-07-19
Measurements of the isotopic contents of spent nuclear fuel provide experimental data that are a prerequisite for validating computer codes and nuclear data for many spent fuel applications. Under the auspices of the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) and guidance of the Expert Group on Assay Data of Spent Nuclear Fuel of the NEA Working Party on Nuclear Criticality Safety, a new database of expanded spent fuel isotopic compositions has been compiled. The database, Spent Fuel Compositions (SFCOMPO) 2.0, includes measured data for more than 750 fuel samples acquired from 44 different reactors and representing eight different reactor technologies. Measurements for more than 90 isotopes are included. This new database provides data essential for establishing the reliability of code systems for inventory predictions, but it also has broader potential application to nuclear data evaluation. Furthermore, the database, together with adjoint-based sensitivity and uncertainty tools for transmutation systems developed to quantify the importance of nuclear data on nuclide concentrations, is described.
Zhao, Xinjie; Zeng, Zhongda; Chen, Aiming; Lu, Xin; Zhao, Chunxia; Hu, Chunxiu; Zhou, Lina; Liu, Xinyu; Wang, Xiaolin; Hou, Xiaoli; Ye, Yaorui; Xu, Guowang
2018-05-29
Identification of metabolites is an essential step in metabolomics studies to interpret the regulatory mechanisms of pathological and physiological processes. However, it remains a major challenge in LC-MSn-based studies because of the complexity of the mass spectra, the chemical diversity of metabolites, and the deficiencies of standards databases. In this work, a comprehensive strategy is developed for accurate and batch metabolite identification in non-targeted metabolomics studies. First, a well-defined procedure was applied to generate reliable and standard LC-MS2 data, including tR, MS1 and MS2 information, under a standard operational procedure (SOP). An in-house database including about 2000 metabolites was constructed and used to identify the metabolites in non-targeted metabolic profiling by retention time calibration using internal standards, precursor ion alignment and ion fusion, auto-MS2 information extraction and selection, and database batch searching and scoring. As an application example, a pooled serum sample was analyzed to demonstrate the strategy; 202 metabolites were identified in the positive ion mode. This shows that our strategy is useful for LC-MSn-based non-targeted metabolomics studies.
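The retention-time calibration and database matching steps described above can be illustrated with a small sketch: observed retention times are mapped onto the library scale via a linear fit through internal standards, and precursor masses are matched within a ppm tolerance. The function names, tolerances, and library entry below are hypothetical illustrations, not the authors' actual pipeline:

```python
def calibrate_rt(rt, standards):
    """Map an observed retention time onto the library scale via a linear
    least-squares fit through internal standards [(observed, reference), ...]."""
    n = len(standards)
    mx = sum(o for o, _ in standards) / n
    my = sum(r for _, r in standards) / n
    slope = (sum((o - mx) * (r - my) for o, r in standards)
             / sum((o - mx) ** 2 for o, _ in standards))
    return my + slope * (rt - mx)

def match_features(features, library, standards, mz_tol_ppm=10.0, rt_tol=0.5):
    """Match (m/z, observed RT) features against {name: (m/z, RT)} entries."""
    hits = []
    for mz, rt in features:
        rt_cal = calibrate_rt(rt, standards)
        for name, (lib_mz, lib_rt) in library.items():
            ppm = abs(mz - lib_mz) / lib_mz * 1e6
            if ppm <= mz_tol_ppm and abs(rt_cal - lib_rt) <= rt_tol:
                hits.append((name, round(ppm, 2)))
    return hits
```

In a real workflow the candidate list would then be rescored against the auto-MS2 fragment spectra; this sketch covers only the tR/MS1 pre-filter.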
Magnette, Amandine; Huang, Te-Din; Renzi, Francesco; Bogaerts, Pierre; Cornelis, Guy R; Glupczynski, Youri
2016-01-01
Capnocytophaga canimorsus and Capnocytophaga cynodegmi can be transmitted from dogs or cats and cause serious human infections. We aimed to evaluate the ability of matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) to identify these two Capnocytophaga species. Ninety-four C. canimorsus and 10 C. cynodegmi isolates identified by 16S rRNA gene sequencing were analyzed. Using the MALDI BioTyper database, correct identification was achieved for only 16 of 94 (17%) C. canimorsus and all 10 C. cynodegmi strains, according to the manufacturer's log score specifications. Following the establishment of a complementary homemade reference database by addition of 51 C. canimorsus and 8 C. cynodegmi mass spectra, MALDI-TOF MS provided reliable identification to the species level for 100% of the 45 blind-coded Capnocytophaga isolates tested. MALDI-TOF MS can accurately identify C. canimorsus and C. cynodegmi using an enriched database and thus constitutes a valuable diagnostic tool in the clinical laboratory. Copyright © 2016 Elsevier Inc. All rights reserved.
The Cambridge Structural Database
Groom, Colin R.; Bruno, Ian J.; Lightfoot, Matthew P.; Ward, Suzanna C.
2016-01-01
The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal–organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface. PMID:27048719
Extraction of land cover change information from ENVISAT-ASAR data in Chengdu Plain
NASA Astrophysics Data System (ADS)
Xu, Wenbo; Fan, Jinlong; Huang, Jianxi; Tian, Yichen; Zhang, Yong
2006-10-01
Land cover data are essential to most global change research objectives, including the assessment of current environmental conditions and the simulation of future environmental scenarios that ultimately lead to public policy development. The Chinese Academy of Sciences generated a nationwide land cover database in order to quantify and spatially characterize land use/cover changes (LUCC) in the 1990s. To keep the database reliable, it must be updated regularly, but it is difficult to obtain remote sensing data for extracting land cover change information at large scale. Because optical remote sensing data are hard to acquire over the Chengdu Plain, the objective of this research was to evaluate multitemporal ENVISAT advanced synthetic aperture radar (ASAR) data for extracting land cover change information. Based on fieldwork and the nationwide 1:100000 land cover database, the paper assesses several land cover changes in the Chengdu Plain, for example: crop to buildings, forest to buildings, and forest to bare land. The results show that ENVISAT ASAR data have great potential for applications extracting land cover change information.
Benschop, Corina C G; van der Beek, Cornelis P; Meiland, Hugo C; van Gorp, Ankie G M; Westen, Antoinette A; Sijen, Titia
2011-08-01
To analyze DNA samples with very low DNA concentrations, various methods have been developed that sensitize short tandem repeat (STR) typing. Sensitized DNA typing is accompanied by stochastic amplification effects, such as allele drop-outs and drop-ins. Therefore low template (LT) DNA profiles are interpreted with care. One can either try to infer the genotype by a consensus method that uses alleles confirmed in replicate analyses, or one can use a statistical model to evaluate the strength of the evidence in a direct comparison with a known DNA profile. In this study we focused on the first strategy and we show that the procedure by which the consensus profile is assembled will affect genotyping reliability. In order to gain insight in the roles of replicate number and requested level of reproducibility, we generated six independent amplifications of samples of known donors. The LT methods included both increased cycling and enhanced capillary electrophoresis (CE) injection [1]. Consensus profiles were assembled from two to six of the replications using four methods: composite (include all alleles), n-1 (include alleles detected in all but one replicate), n/2 (include alleles detected in at least half of the replicates) and 2× (include alleles detected twice). We compared the consensus DNA profiles with the DNA profile of the known donor, studied the stochastic amplification effects and examined the effect of the consensus procedure on DNA database search results. From all these analyses we conclude that the accuracy of LT DNA typing and the efficiency of database searching improve when the number of replicates is increased and the consensus method is n/2. The most functional number of replicates within this n/2 method is four (although a replicate number of three suffices for samples showing >25% of the alleles in standard STR typing). 
This approach was also the optimal strategy for the analysis of 2-person mixtures, although modified search strategies may be needed to retrieve the minor component in database searches. From the database searches follows the recommendation to specifically mark LT DNA profiles when entering them into the DNA database. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
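The four consensus procedures compared in this study differ only in how many replicates an allele must appear in before it enters the consensus profile. A small sketch of that logic (allele designations hypothetical):

```python
from collections import Counter

def consensus_profile(replicates, method):
    """Assemble a consensus allele set from replicate STR typing results.

    composite : keep every allele seen in any replicate
    n-1       : keep alleles seen in all but at most one replicate
    n/2       : keep alleles seen in at least half of the replicates
    2x        : keep alleles seen in at least two replicates
    """
    n = len(replicates)
    counts = Counter(allele for rep in replicates for allele in set(rep))
    threshold = {"composite": 1, "n-1": n - 1, "n/2": n / 2, "2x": 2}[method]
    return {allele for allele, c in counts.items() if c >= threshold}
```

The composite rule keeps drop-in artifacts, while n-1 risks discarding true alleles that dropped out more than once; the paper's finding that n/2 over four replicates works best sits between these extremes.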
Critical evaluation and thermodynamic optimization of the Iron-Rare-Earth systems
NASA Astrophysics Data System (ADS)
Konar, Bikram
Rare-Earth (RE) elements, by virtue of their distinctive magnetic, electronic and chemical properties, are gaining importance in power, electronics, telecommunications and sustainable green technology related industries. Magnets made from RE alloys are more powerful than conventional magnets and offer greater longevity and high-temperature workability. The imbalance between RE element supply and demand has increased the importance of recycling and extracting REEs from used permanent magnets. However, the lack of thermodynamic data on RE alloys has made it difficult to design an effective extraction and recycling process. In this regard, computational thermodynamic calculations can serve as a cost-effective and time-saving tool to design a waste magnet recycling process. The most common RE permanent magnet is the Nd magnet (Nd2Fe14B). Various elements such as Dy, Tb, Pr, Cu, Co, Ni, etc. are also added to improve its magnetic and mechanical properties. In order to perform reliable thermodynamic calculations for the RE recycling process, accurate thermodynamic databases for RE and related alloys are required. Such a thermodynamic database can be developed using the so-called CALPHAD method. Database development based on the CALPHAD method is essentially the critical evaluation and optimization of all available thermodynamic and phase diagram data. As a result, one set of self-consistent thermodynamic functions for all phases in a given system can be obtained, which can reproduce all reliable thermodynamic and phase diagram data. The database containing the optimized Gibbs energy functions can be used to calculate complex chemical reactions for any high temperature process. Typically a Gibbs energy minimization routine, such as in FactSage software, can be used to obtain the accurate thermodynamic equilibrium in multicomponent systems.
As part of a large thermodynamic database development for permanent magnet recycling and Mg alloy design, all thermodynamic and phase diagram data in the literature for the fourteen Fe-RE binary systems: Fe-La, Fe-Ce, Fe-Pr, Fe-Nd, Fe-Sm, Fe-Gd, Fe-Tb, Fe-Dy, Fe-Ho, Fe-Er, Fe-Tm, Fe-Lu, Fe-Sc and Fe-Y are critically evaluated and optimized to obtain thermodynamic model parameters. The model parameters can be used to calculate phase diagrams and Gibbs energies of all phases as functions of temperature and composition. This database can be incorporated with the present thermodynamic database in FactSage software to perform complex chemical reactions and phase diagram calculations for RE magnet recycling process.
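The Gibbs energy minimization that such a database feeds can be illustrated, in highly simplified form, with a binary regular-solution model: the equilibrium two-phase compositions are those that minimize the total Gibbs energy of the mixture (the common-tangent construction). The interaction parameter and temperature below are arbitrary illustrative values, not optimized data for any Fe-RE system:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def g_mix(x, T, omega):
    """Molar Gibbs energy of mixing of a regular binary solution A-B."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    ideal = R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))
    return ideal + omega * x * (1 - x)

def binodal(T, omega, steps=200):
    """Grid-search the two-phase split of an equimolar alloy that minimizes
    the total Gibbs energy (the common-tangent construction)."""
    xs = [i / steps for i in range(1, steps)]
    best_g, best_pair = g_mix(0.5, T, omega), (0.5, 0.5)
    for x1 in (x for x in xs if x < 0.5):
        for x2 in (x for x in xs if x > 0.5):
            f = (0.5 - x1) / (x2 - x1)  # lever rule: fraction of phase 2
            g = (1 - f) * g_mix(x1, T, omega) + f * g_mix(x2, T, omega)
            if g < best_g:
                best_g, best_pair = g, (x1, x2)
    return best_pair
```

A CALPHAD engine such as FactSage does the same minimization, but over many phases with optimized, composition- and temperature-dependent Gibbs energy functions rather than a single regular-solution parameter.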
NASA Astrophysics Data System (ADS)
Park, Jun-Hyoung; Song, Mi-Young; Plasma Fundamental Technology Research Team
2015-09-01
Plasma databases are required to compute plasma parameters, and highly reliable databases are closely tied to the accuracy of simulations. Therefore, a major concern of the plasma properties collection and evaluation system is to create a sustainable and useful research environment for plasma data. The system is committed to providing not only numerical data but also bibliographic data (including DOI information). Originally, our data collection was done by manual search, which in some cases took a long time to find data. We will find data more automatically and quickly than with these legacy methods by using crawling or a search engine such as Lucene.
Warnell, F; George, B; McConachie, H; Johnson, M; Hardy, R; Parr, J R
2015-09-04
(1) Describe how the Autism Spectrum Database-UK (ASD-UK) was established; (2) investigate the representativeness of the first 1000 children and families who participated, compared to those who chose not to; (3) investigate the reliability of the parent-reported Autism Spectrum Disorder (ASD) diagnoses, and present evidence about the validity of diagnoses, that is, whether children recruited actually have an ASD; (4) present evidence about the representativeness of the ASD-UK children and families, by comparing their characteristics with the first 1000 children and families from the regional Database of children with ASD living in the North East (Dasl(n)e), and children and families identified from epidemiological studies. Recruitment through a network of 50 UK child health teams and self-referral. Parents/carers with a child with ASD, aged 2-16 years, completed questionnaires about ASD and some gave professionals' reports about their children. 1000 families registered with ASD-UK in 30 months. Children of families who participated, and of the 208 who chose not to, were found to be very similar on: gender ratio, year of birth, ASD diagnosis and social deprivation score. The reliability of parent-reported ASD diagnoses of children was very high when compared with clinical reports (over 96%); no database child without ASD was identified. A comparison of gender, ASD diagnosis, age at diagnosis, school placement, learning disability, and deprivation score of children and families from ASD-UK with 1084 children and families from Dasl(n)e, and families from population studies, showed that ASD-UK families are representative of families of children with ASD overall. ASD-UK includes families providing parent-reported data about their child and family, who appear to be broadly representative of UK children with ASD. Families continue to join the databases and more than 3000 families can now be contacted by researchers about UK autism research. 
Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Identification and correction of abnormal, incomplete and mispredicted proteins in public databases.
Nagy, Alinda; Hegyi, Hédi; Farkas, Krisztina; Tordai, Hedvig; Kozma, Evelin; Bányai, László; Patthy, László
2008-08-27
Despite significant improvements in computational annotation of genomes, sequences of abnormal, incomplete or incorrectly predicted genes and proteins remain abundant in public databases. Since the majority of incomplete, abnormal or mispredicted entries are not annotated as such, these errors seriously affect the reliability of these databases. Here we describe the MisPred approach that may provide an efficient means for the quality control of databases. The current version of the MisPred approach uses five distinct routines for identifying abnormal, incomplete or mispredicted entries based on the principle that a sequence is likely to be incorrect if some of its features conflict with our current knowledge about protein-coding genes and proteins: (i) conflict between the predicted subcellular localization of proteins and the absence of the corresponding sequence signals; (ii) presence of extracellular and cytoplasmic domains and the absence of transmembrane segments; (iii) co-occurrence of extracellular and nuclear domains; (iv) violation of domain integrity; (v) chimeras encoded by two or more genes located on different chromosomes. Analyses of predicted EnsEMBL protein sequences of nine deuterostome (Homo sapiens, Mus musculus, Rattus norvegicus, Monodelphis domestica, Gallus gallus, Xenopus tropicalis, Fugu rubripes, Danio rerio and Ciona intestinalis) and two protostome species (Caenorhabditis elegans and Drosophila melanogaster) have revealed that the absence of expected signal peptides and violation of domain integrity account for the majority of mispredictions. Analyses of sequences predicted by NCBI's GNOMON annotation pipeline show that the rates of mispredictions are comparable to those of EnsEMBL. Interestingly, even the manually curated UniProtKB/Swiss-Prot dataset is contaminated with mispredicted or abnormal proteins, although to a much lesser extent than UniProtKB/TrEMBL or the EnsEMBL or GNOMON-predicted entries. 
MisPred works efficiently in identifying errors in predictions generated by the most reliable gene prediction tools such as the EnsEMBL and NCBI's GNOMON pipelines and also guides the correction of errors. We suggest that application of the MisPred approach will significantly improve the quality of gene predictions and the associated databases.
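Rules (ii) and (iii) above amount to simple consistency checks over a protein's annotated features. A minimal sketch, with hypothetical domain names and boolean flags standing in for the real signal-peptide and transmembrane predictions MisPred relies on:

```python
# Hypothetical MisPred-style conflict checks (rules ii and iii from the abstract).
# Domain names and the annotation format are illustrative assumptions only.

EXTRACELLULAR = {"EGF", "Fibronectin_III", "Sushi"}
NUCLEAR = {"Homeobox", "Zinc_finger_C2H2"}

def flag_entry(domains, has_signal_peptide, has_transmembrane):
    """Return a list of conflict flags for one predicted protein entry."""
    flags = []
    extra = [d for d in domains if d in EXTRACELLULAR]
    nuclear = [d for d in domains if d in NUCLEAR]
    # Rule (ii): extracellular domains present, but no signal peptide or
    # transmembrane segment that could route the protein out of the cytoplasm.
    if extra and not (has_signal_peptide or has_transmembrane):
        flags.append("extracellular domain without targeting signal")
    # Rule (iii): extracellular and nuclear domains should not co-occur.
    if extra and nuclear:
        flags.append("co-occurrence of extracellular and nuclear domains")
    return flags
```

Entries flagged this way would then be candidates for manual inspection or automated correction, as described above.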
Normand, A C; Packeu, A; Cassagne, C; Hendrickx, M; Ranque, S; Piarroux, R
2018-05-01
Conventional dermatophyte identification is based on morphological features. However, recent studies have proposed to use the nucleotide sequences of the rRNA internal transcribed spacer (ITS) region as an identification barcode of all fungi, including dermatophytes. Several nucleotide databases are available to compare sequences and thus identify isolates; however, these databases often contain mislabeled sequences that impair sequence-based identification. We evaluated five of these databases on a clinical isolate panel. We selected 292 clinical dermatophyte strains that were prospectively subjected to an ITS2 nucleotide sequence analysis. Sequences were analyzed against the databases, and the results were compared to clusters obtained via DNA alignment of sequence segments. The DNA tree served as the identification standard throughout the study. According to the ITS2 sequence identification, the majority of strains (255/292) belonged to the genus Trichophyton, mainly the T. rubrum complex (n = 184), T. interdigitale (n = 40), T. tonsurans (n = 26), and T. benhamiae (n = 5). Other genera included Microsporum (e.g., M. canis [n = 21] and M. audouinii [n = 10]), Nannizzia gypsea (n = 3), and Epidermophyton (n = 3). Species-level identification of T. rubrum complex isolates was an issue. Overall, ITS DNA sequencing is a reliable tool to identify dermatophyte species given that a comprehensive and correctly labeled database is consulted. Since many inaccurate identification results exist in the DNA databases used for this study, reference databases must be verified frequently and amended in line with the current revisions of fungal taxonomy. Before describing a new species or adding a new DNA reference to the available databases, its position in the phylogenetic tree must be verified. Copyright © 2018 American Society for Microbiology.
LiDAR-derived site index in the U.S. Pacific Northwest--challenges and opportunities
Demetrios Gatziolis
2007-01-01
Site Index (SI), a key inventory parameter, is traditionally estimated by using costly and laborious field assessments of tree height and age. The increasing availability of reliable information on stand initiation timing and extent of planted, even-aged stands maintained in digital databases suggests that information on the height of dominant trees suffices for...
2014-09-01
NoSQL Data Store Technologies. John Klein, Patrick Donohoe, Neil Ernst (Software Engineering Institute). ...distribute data; 4. Data Replication – determines how a NoSQL database facilitates reliable, high-performance data replication to build...
Toward automatic finite element analysis
NASA Technical Reports Server (NTRS)
Kela, Ajay; Perucchio, Renato; Voelcker, Herbert
1987-01-01
Two problems must be solved if the finite element method is to become a reliable and affordable blackbox engineering tool. Finite element meshes must be generated automatically from computer aided design databases and mesh analysis must be made self-adaptive. The experimental system described solves both problems in 2-D through spatial and analytical substructuring techniques that are now being extended into 3-D.
ERIC Educational Resources Information Center
Teixeira, Elder Sales; Greca, Ileana Maria; Freire, Olival, Jr.
2012-01-01
This work is a systematic review of studies that investigate teaching experiences applying History and Philosophy of Science (HPS) in physics classrooms, with the aim of obtaining critical and reliable information on this subject. After a careful process of selection and exclusion of studies compiled from a variety of databases, an in-depth review…
NiCd cell reliability in the mission environment
NASA Technical Reports Server (NTRS)
Denson, William K.; Klein, Glenn C.
1993-01-01
This paper summarizes an effort by Gates Aerospace Batteries (GAB) and the Reliability Analysis Center (RAC) to analyze survivability data for both General Electric and GAB NiCd cells utilized in various spacecraft. For simplicity's sake, all mission environments are described as either low Earth orbit (LEO) or geosynchronous Earth orbit (GEO). Extreme value statistical methods are applied to this database because of the longevity of the numerous missions while encountering relatively few failures. Every attempt was made to include all known instances of cell-induced failures of the battery and to exclude battery-induced failures of the cell. While this distinction may be somewhat limited due to availability of in-flight data, we have accepted the learned opinion of the specific customer contacts to ensure integrity of the common databases. This paper advances the preliminary analysis reported at the 1991 NASA Battery Workshop. That prior analysis was concerned with an estimated 278 million cell-hours of operation encompassing 183 satellites and cited 'no reported failures to date.' The present analysis covers 428 million cell-hours of operation encompassing 212 satellites and reports seven 'cell-induced failures.'
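With 428 million cell-hours and only seven failures, the point estimate of a constant failure rate is tiny, and an upper confidence bound is more informative. A hedged illustration (a simple constant-rate Poisson bound, not the extreme value methods the paper actually applies):

```python
import math

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))

def failure_rate_upper_bound(failures, hours, conf=0.95, hi=1e-6):
    """One-sided upper confidence bound on a constant failure rate (per hour),
    found by bisection on the exact Poisson tail: the largest rate still
    consistent with observing this few failures."""
    lo = 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(failures, mid * hours) > 1 - conf:
            lo = mid  # rate too small: data still likely
        else:
            hi = mid  # rate too large: observing <= `failures` is implausible
    return hi

T = 428e6   # cell-hours of operation (from the abstract)
r = 7       # reported cell-induced failures
mle = r / T                              # ~1.6e-8 failures per cell-hour
ub = failure_rate_upper_bound(r, T)      # 95% upper bound, ~3.1e-8 per hour
```

The bound agrees with the classical chi-square formula λ_U = χ²(2r+2, 0.95) / (2T) for the same data.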
Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M
2011-07-01
Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses.
Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I.; Marcotte, Edward M.
2011-01-01
Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for all possible PSMs and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for all detected proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses. PMID:21488652
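The false discovery rates mentioned above are conventionally estimated with the target-decoy strategy. A minimal sketch of that standard estimate (MSblender's own probability model is more elaborate and accounts for score correlation across engines):

```python
def fdr_at_threshold(psms, thresh):
    """Estimate the false discovery rate among peptide-spectrum matches with
    score >= thresh as (# decoy hits) / (# target hits), the usual
    target-decoy ratio. psms: list of (score, is_decoy) pairs."""
    targets = sum(1 for score, is_decoy in psms if score >= thresh and not is_decoy)
    decoys = sum(1 for score, is_decoy in psms if score >= thresh and is_decoy)
    return decoys / max(targets, 1)  # avoid division by zero above the top score
```

Sweeping `thresh` downward and stopping at the desired FDR gives the identification list that methods like MSblender then try to enlarge at the same error rate.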
Colony-PCR Is a Rapid Method for DNA Amplification of Hyphomycetes
Walch, Georg; Knapp, Maria; Rainer, Georg; Peintner, Ursula
2016-01-01
Fungal pure cultures identified with both classical morphological methods and through barcoding sequences are a basic requirement for reliable reference sequences in public databases. Improved techniques for accelerated construction of a DNA barcode reference library will result in considerably improved sequence databases covering a wider taxonomic range. Fast, cheap, and reliable methods for obtaining DNA sequences from fungal isolates are, therefore, a valuable tool for the scientific community. Direct colony PCR was already successfully established for yeasts, but had not yet been evaluated for a wide range of anamorphic soil fungi, and no direct amplification protocol for hyphomycetes without tissue pre-treatment had been published. Here, we present a colony PCR technique applied directly to fungal hyphae without prior DNA extraction or other manipulation. Seven hundred eighty-eight fungal strains from 48 genera were tested, with a success rate of 86%. PCR success varied considerably: DNA of fungi belonging to the genera Cladosporium, Geomyces, Fusarium, and Mortierella could be amplified with high success. DNA of soil-borne yeasts was always successfully amplified. Absidia, Mucor, Trichoderma, and Penicillium isolates had noticeably lower PCR success. PMID:29376929
Chaspari, Theodora; Soldatos, Constantin; Maragos, Petros
2015-01-01
The development of ecologically valid procedures for collecting reliable and unbiased emotional data is a prerequisite for computer interfaces with social and affective intelligence targeting patients with mental disorders. To this end, the Athens Emotional States Inventory (AESI) proposes the design, recording and validation of an audiovisual database for five emotional states: anger, fear, joy, sadness and neutral. The items of the AESI consist of sentences, each having content indicative of the corresponding emotion. Emotional content was assessed through a survey of 40 young participants with a questionnaire following the Latin square design. The emotional sentences that were correctly identified by 85% of the participants were recorded in a soundproof room with microphones and cameras. A preliminary validation of the AESI is performed through automatic emotion recognition experiments from speech. The resulting database contains 696 recorded utterances in the Greek language by 20 native speakers and has a total duration of approximately 28 min. Speech classification results yield accuracy of up to 75.15% for automatically recognizing the emotions in the AESI. These results indicate the usefulness of our approach for collecting emotional data with reliable content, balanced across classes and with reduced environmental variability.
Automatic pattern localization across layout database and photolithography mask
NASA Astrophysics Data System (ADS)
Morey, Philippe; Brault, Frederic; Beisser, Eric; Ache, Oliver; Röth, Klaus-Dieter
2016-03-01
Advanced process photolithography masks require more and more controls for registration versus design and critical dimension uniformity (CDU). The measurement points should be spread over the whole mask and may be denser in areas critical to wafer overlay requirements. This means that some, if not many, of these controls should be made inside the customer die and may use non-dedicated patterns. It is then mandatory to access the original layout database to select patterns for the metrology process. Finding hundreds of relevant patterns in a database containing billions of polygons may be possible, but in addition, the complete metrology job must be created quickly and reliably. Combining, on one hand, software expertise in mask database processing and, on the other hand, advanced skills in control and registration equipment, we have developed a Mask Dataprep Station able to select an appropriate number of measurement targets and their positions in a huge database and automatically create measurement jobs on the corresponding areas on the mask for the registration metrology system. In addition, the required design clips are generated from the database in order to perform the rendering procedure on the metrology system. This new methodology has been validated on a real production line for the most advanced processes. This paper presents the main challenges that we have faced, as well as some results on the global performance.
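Selecting hundreds of well-spread targets from billions of candidate locations can start with something as simple as spatial binning. A hypothetical sketch (the Mask Dataprep Station's actual selection logic is not described here): keep at most one candidate pattern per cell of a coarse grid over the mask.

```python
def pick_uniform_targets(candidates, mask_w, mask_h, nx, ny):
    """candidates: list of (x, y) pattern locations from the layout database.
    Keep at most one candidate per cell of an nx-by-ny grid, so the retained
    measurement targets cover the whole mask roughly uniformly."""
    chosen = {}
    for x, y in candidates:
        cell = (min(int(x / mask_w * nx), nx - 1),
                min(int(y / mask_h * ny), ny - 1))
        chosen.setdefault(cell, (x, y))  # first candidate wins per cell
    return list(chosen.values())
```

A denser grid in overlay-critical regions, as the abstract suggests, would simply use a finer `nx`/`ny` there.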
Salemi, Jason L; Salinas-Miranda, Abraham A; Wilson, Roneé E; Salihu, Hamisu M
2015-01-01
Objective: To describe the use of a clinically enhanced maternal and child health (MCH) database to strengthen community-engaged research activities, and to support the sustainability of data infrastructure initiatives. Data Sources/Study Setting: Population-based, longitudinal database covering over 2.3 million mother–infant dyads during a 12-year period (1998–2009) in Florida. Setting: A community-based participatory research (CBPR) project in a socioeconomically disadvantaged community in central Tampa, Florida. Study Design: Case study of the use of an enhanced state database for supporting CBPR activities. Principal Findings: A federal data infrastructure award resulted in the creation of an MCH database in which over 92 percent of all birth certificate records for infants born between 1998 and 2009 were linked to maternal and infant hospital encounter-level data. The population-based, longitudinal database was used to supplement data collected from focus groups and community surveys with epidemiological and health care cost data on important MCH disparity issues in the target community. Data were used to facilitate a community-driven, decision-making process in which the most important priorities for intervention were identified. Conclusions: Integrating statewide all-payer, hospital-based databases into CBPR can empower underserved communities with a reliable source of health data, and it can promote the sustainability of newly developed data systems. PMID:25879276
WikiPathways: a multifaceted pathway database bridging metabolomics to other omics research.
Slenter, Denise N; Kutmon, Martina; Hanspers, Kristina; Riutta, Anders; Windsor, Jacob; Nunes, Nuno; Mélius, Jonathan; Cirillo, Elisa; Coort, Susan L; Digles, Daniela; Ehrhart, Friederike; Giesbertz, Pieter; Kalafati, Marianthi; Martens, Marvin; Miller, Ryan; Nishida, Kozo; Rieswijk, Linda; Waagmeester, Andra; Eijssen, Lars M T; Evelo, Chris T; Pico, Alexander R; Willighagen, Egon L
2018-01-04
WikiPathways (wikipathways.org) captures the collective knowledge represented in biological pathways. By providing pathways in a curated, machine-readable format, it enables omics data analysis and visualization. WikiPathways and other pathway databases are used to analyze experimental data by research groups in many fields. Due to the open and collaborative nature of the WikiPathways platform, our content keeps growing and is getting more accurate, making WikiPathways a reliable and rich pathway database. Previously, however, the focus was primarily on genes and proteins, leaving many metabolites with only limited annotation. Recent curation efforts focused on improving the annotation of metabolism and metabolic pathways by associating unmapped metabolites with database identifiers and providing more detailed interaction knowledge. Here, we report the outcomes of the continued growth and curation efforts, such as a doubling of the number of annotated metabolite nodes in WikiPathways. Furthermore, we introduce an OpenAPI documentation of our web services and the FAIR (Findable, Accessible, Interoperable and Reusable) annotation of resources to increase the interoperability of the knowledge encoded in these pathways and experimental omics data. New search options, monthly downloads, more links to metabolite databases, and new portals make pathway knowledge more effortlessly accessible to individual researchers and research communities. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Development of a reference database for assessing dietary nitrate in vegetables.
Blekkenhorst, Lauren C; Prince, Richard L; Ward, Natalie C; Croft, Kevin D; Lewis, Joshua R; Devine, Amanda; Shinde, Sujata; Woodman, Richard J; Hodgson, Jonathan M; Bondonno, Catherine P
2017-08-01
Nitrate from vegetables improves vascular health with short-term intake. Whether this translates into improved long-term health outcomes has yet to be investigated. To enable reliable analysis of nitrate intake from food records, there is a strong need for a comprehensive nitrate content of vegetables database. A systematic literature search (1980-2016) was performed using Medline, Agricola and Commonwealth Agricultural Bureaux abstracts databases. The nitrate content of vegetables database contains 4237 records from 255 publications with data on 178 vegetables and 22 herbs and spices. The nitrate content of individual vegetables ranged from Chinese flat cabbage (median; range: 4240; 3004-6310 mg/kg FW) to corn (median; range: 12; 5-1091 mg/kg FW). The database was applied to estimate vegetable nitrate intake using 24-h dietary recalls (24-HDRs) and food frequency questionnaires (FFQs). Significant correlations were observed between urinary nitrate excretion and 24-HDR (r = 0.4, P = 0.013), between 24-HDR and 12 month FFQs (r = 0.5, P < 0.001) as well as two 4 week FFQs administered 8 weeks apart (r = 0.86, P < 0.001). This comprehensive nitrate database allows quantification of dietary nitrate from a large variety of vegetables. It can be applied to dietary records to explore the associations between nitrate intake and health outcomes in human studies. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
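Applying such a reference database to a food record reduces to joining intake amounts with per-vegetable nitrate contents. A toy sketch using the two median values quoted in the abstract (the intake amounts are invented for illustration):

```python
# Median nitrate contents in mg/kg fresh weight; the two values below are the
# extremes quoted in the abstract. A real lookup table would hold all 178
# vegetables and the 22 herbs and spices.
NITRATE_MG_PER_KG = {"chinese flat cabbage": 4240, "corn": 12}

def daily_nitrate_mg(intake_g_per_day):
    """intake_g_per_day: {vegetable: grams eaten per day}.
    Returns total estimated nitrate intake in mg/day (grams -> kg: /1000)."""
    return sum(NITRATE_MG_PER_KG[veg] * grams / 1000.0
               for veg, grams in intake_g_per_day.items())
```

Summing such per-food estimates over a 24-h dietary recall or FFQ yields the intake values that were correlated with urinary nitrate excretion above.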
Design and implementation of online automatic judging system
NASA Astrophysics Data System (ADS)
Liang, Haohui; Chen, Chaojie; Zhong, Xiuyu; Chen, Yuefeng
2017-06-01
To address the low efficiency and poor reliability of manual judging in programming training and competitions, we designed an Online Automatic Judging (OAJ) system. The OAJ system, comprising a sandbox judging side and a Web side, automatically compiles and runs submitted code and generates evaluation scores and corresponding reports. To prevent malicious code from damaging the system, the OAJ system runs submissions in a sandbox, ensuring system safety. It uses thread pools to run tests in parallel and adopts database optimization mechanisms, such as horizontal table partitioning, to improve system performance and resource utilization. Test results show that the system has high performance, reliability, and stability, and excellent extensibility.
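The parallel-testing idea can be sketched with a thread pool that runs each test case of a submission in a separate subprocess with a timeout. This is a minimal illustration only; the real OAJ judges inside a sandbox with resource limits, which this sketch does not reproduce.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def judge(cmd, stdin_text, expected, timeout=2):
    """Run one submitted program on one test case.
    Returns 'AC' (accepted), 'WA' (wrong answer), 'TLE' (time limit), 'RE' (runtime error)."""
    try:
        r = subprocess.run(cmd, input=stdin_text, capture_output=True,
                           text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return "TLE"
    if r.returncode != 0:
        return "RE"
    return "AC" if r.stdout.strip() == expected.strip() else "WA"

def judge_all(cmd, cases, workers=4):
    """Judge many (input, expected_output) cases in parallel with a thread pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda c: judge(cmd, c[0], c[1]), cases))
```

Because each test case waits on an external process, threads (rather than processes) are sufficient here; the pool size would be tuned to the judging hosts.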
The limits of protein sequence comparison?
Pearson, William R; Sierk, Michael L
2010-01-01
Modern sequence alignment algorithms are used routinely to identify homologous proteins, proteins that share a common ancestor. Homologous proteins always share similar structures and often have similar functions. Over the past 20 years, sequence comparison has become both more sensitive, largely because of profile-based methods, and more reliable, because of more accurate statistical estimates. As sequence and structure databases become larger, and comparison methods become more powerful, reliable statistical estimates will become even more important for distinguishing similarities that are due to homology from those that are due to analogy (convergence). The newest sequence alignment methods are more sensitive than older methods, but more accurate statistical estimates are needed for their full power to be realized. PMID:15919194
Enhanced Component Performance Study: Turbine-Driven Pumps 1998–2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-11-01
This report presents an enhanced performance evaluation of turbine-driven pumps (TDPs) at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The TDP failure modes considered are failure to start (FTS), failure to run less than or equal to one hour (FTR≤1H), failure to run more than one hour (FTR>1H), and, for normally running systems, FTS and failure to run (FTR). The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates for reliability are provided for the entire active period. Statistically significant increasing trends were identified for TDP unavailability, for frequency of start demands for standby TDPs, and for run hours in the first hour after start. Statistically significant decreasing trends were identified for start demands for normally running TDPs, and for run hours per reactor critical year for normally running TDPs.
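Component reliability studies of this kind commonly update observed failure counts with a Jeffreys noninformative prior; assuming that convention (a hedged assumption, since the abstract does not state the estimator), the per-demand and per-hour estimates reduce to one-liners:

```python
def jeffreys_failure_prob(failures, demands):
    """Posterior mean failure probability per demand (e.g. FTS) under a
    Jeffreys Beta(0.5, 0.5) prior: (f + 0.5) / (d + 1)."""
    return (failures + 0.5) / (demands + 1.0)

def jeffreys_failure_rate(failures, hours):
    """Posterior mean failure rate per run hour (e.g. FTR>1H) under a
    Jeffreys Gamma(0.5) prior: (f + 0.5) / T."""
    return (failures + 0.5) / hours
```

The Jeffreys update keeps estimates finite even in years with zero failures, which is what makes the yearly trending described above workable.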
The Golosiiv on-line plate archive database, management and maintenance
NASA Astrophysics Data System (ADS)
Pakuliak, L.; Sergeeva, T.
2007-08-01
We intend to create an online version of the database of the MAO NASU plate archive as VO-compatible structures, in accordance with principles developed by the International Virtual Observatory Alliance, in order to make it available to the world astronomical community. The online version of the log-book database is built with MySQL and PHP. The data management system provides a user interface offering detailed, traditional form-filling and radial searches of plates, auxiliary samplings, listings of each collection, and browsing of detailed collection descriptions. The administrative tool allows the database administrator to correct data, add new data sets, and control the integrity and consistency of the database as a whole. The VO-compatible database is currently being constructed to meet the demands and principles of international data archives and has to be strongly generalized, in order to allow data mining through standard interfaces and to best fit the demands of the WFPDB Group for databases of plate catalogues. Ongoing enhancements of the database toward the WFPDB bring the problem of data verification to the forefront, as it demands a high degree of data reliability. The process of data verification is practically endless and inseparable from data management, owing to the diverse nature of data errors and hence the variety of ways of identifying and fixing them. The current status of the MAO NASU glass archive forces activity in both directions simultaneously: enhancement of the log-book database with new sets of observational data, creation of the generalized database, and cross-identification between them. The VO-compatible version of the database is being supplied with digitized data of plates obtained with a MicroTek ScanMaker 9800 XL TMA. Scanning is not exhaustive but is conducted selectively within the framework of special projects.
[A Terahertz Spectral Database Based on Browser/Server Technique].
Zhang, Zhuo-yong; Song, Yue
2015-09-01
As key scientific and technical problems have been solved and instrumentation has developed, the application of terahertz technology in various fields has received more and more attention. Owing to its unique characteristic advantages, terahertz technology shows a broad future in fast, non-damaging detection, as well as many other fields. Terahertz technology combined with other complementary methods can be used to cope with many difficult practical problems that could not be solved before. Further development of practical terahertz detection methods critically depends on a good and reliable terahertz spectral database. We recently developed a browser/server (B/S)-based terahertz spectral database, designing its main structure and main functions to fulfil practical requirements. The database now includes more than 240 items, with spectral information collected from three sources: (1) collection and citation from other terahertz spectral databases abroad; (2) published literature; and (3) spectra measured in our laboratory. The present paper introduces the basic structure and fundamental functions of the terahertz spectral database developed in our laboratory. One of its key functions is the calculation of optical parameters: absorption coefficient, refractive index, and other optical parameters can be calculated from input THz time-domain spectra. The other main functions and search methods of the browser/server-based terahertz spectral database are also discussed. The database search system provides users with convenient functions including user registration, inquiry, display of spectral figures and molecular structures, and spectral matching, and offers an on-line search function for registered users. Registered users can compare an input THz spectrum with the spectra in the database; using the resulting correlation coefficients, the search can be performed quickly and conveniently. Our terahertz spectral database can be accessed at http://www.teralibrary.com. The database is based on spectral information so far and will be improved in the future. We hope it can provide users with powerful, convenient, and highly efficient functions and promote broader application of terahertz technology.
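The optical-parameter calculation mentioned above can be illustrated with the standard thick-slab THz-TDS extraction formulas (an assumption: the database's own formulas are not given in the abstract). For a single frequency bin whose phase is already unwrapped:

```python
import cmath
import math

C = 299_792_458.0  # speed of light, m/s

def thz_optical_params(freq_hz, t_sample, t_ref, d):
    """Standard thick-slab THz-TDS parameter extraction (illustrative only).
    freq_hz: frequency of this bin; t_sample, t_ref: complex spectral values
    of the sample and reference pulses; d: sample thickness in metres.
    Returns (refractive index n, absorption coefficient alpha in 1/m)."""
    T = t_sample / t_ref              # complex transmission of the slab
    w = 2 * math.pi * freq_hz
    phi = -cmath.phase(T)             # phase delay (assumed already unwrapped)
    n = 1.0 + phi * C / (w * d)       # extra optical path relative to air
    # remove the Fresnel losses at the two interfaces before taking -ln|T|
    fresnel = 4 * n / (n + 1) ** 2
    alpha = -(2.0 / d) * math.log(abs(T) / fresnel)
    return n, alpha
```

A real pipeline would apply this per frequency bin after FFT and phase unwrapping, and would correct for Fabry-Perot echoes in thin samples.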
Kilgus, Stephen P; Riley-Tillman, T Chris; Stichter, Janine P; Schoemann, Alexander M; Bellesheim, Katie
2016-09-01
The purpose of this investigation was to evaluate the reliability of Direct Behavior Ratings-Social Competence (DBR-SC) ratings. Participants included 60 students identified as possessing deficits in social competence, as well as their 23 classroom teachers. Teachers used DBR-SC to complete ratings of 5 student behaviors within the general education setting on a daily basis across approximately 5 months. During this time, each student was assigned to 1 of 2 intervention conditions, including the Social Competence Intervention-Adolescent (SCI-A) and a business-as-usual (BAU) intervention. Ratings were collected across 3 intervention phases, including pre-, mid-, and postintervention. Results suggested DBR-SC ratings were highly consistent across time within each student, with reliability coefficients predominantly falling in the .80 and .90 ranges. Findings further indicated such levels of reliability could be achieved with only a small number of ratings, with estimates varying between 2 and 10 data points. Group comparison analyses further suggested the reliability of DBR-SC ratings increased over time, such that student behavior became more consistent throughout the intervention period. Furthermore, analyses revealed that for 2 of the 5 DBR-SC behavior targets, the increase in reliability over time was moderated by intervention grouping, with students receiving SCI-A demonstrating greater increases in reliability relative to those in the BAU group. Limitations of the investigation as well as directions for future research are discussed herein. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Which symptom assessments and approaches are uniquely appropriate for paediatric concussion?
Gioia, G A; Schneider, J C; Vaughan, C G; Isquith, P K
2009-05-01
To (a) identify post-concussion symptom scales appropriate for children and adolescents in sports; (b) review evidence for reliability and validity; and (c) recommend future directions for scale development. Quantitative and qualitative literature review of symptom rating scales appropriate for children and adolescents aged 5 to 22 years. Literature identified via search of Medline, Ovid-Medline and PsycInfo databases; review of reference lists in identified articles; querying sports concussion specialists. 29 articles met study inclusion criteria. 5 symptom scales examined in 11 studies for ages 5-12 years and in 25 studies for ages 13-22. 10 of 11 studies for 5-12-year-olds presented validity evidence for three scales; 7 studies provided reliability evidence for two scales; 7 studies used serial administrations but no reliable change metrics. Two scales included parent-reports and one included a teacher report. 24 of 25 studies for 13-22 year-olds presented validity evidence for five measures; seven studies provided reliability evidence for four measures with 18 studies including serial administrations and two examining Reliable Change. Psychometric evidence for symptom scales is stronger for adolescents than for younger children. Most scales provide evidence of concurrent validity, discriminating concussed and non-concussed groups. Few report reliability and evidence for validity is narrow. Two measures include parent/teacher reports. Few scales examine reliable change statistics, limiting interpretability of temporal changes. Future studies are needed to fully define symptom scale psychometric properties with the greatest need in younger student-athletes.
The reliability and validity of ultrasound to quantify muscles in older adults: a systematic review
Scafoglieri, Aldo; Jager‐Wittenaar, Harriët; Hobbelen, Johannes S.M.; van der Schans, Cees P.
2017-01-01
This review evaluates the reliability and validity of ultrasound to quantify muscles in older adults. The databases PubMed, Cochrane, and Cumulative Index to Nursing and Allied Health Literature were systematically searched for studies. In 17 studies, the reliability (n = 13) and validity (n = 8) of ultrasound to quantify muscles in community-dwelling older adults (≥60 years) or a clinical population were evaluated. Four out of 13 reliability studies investigated both intra-rater and inter-rater reliability. Intraclass correlation coefficient (ICC) scores for reliability ranged from −0.26 to 1.00. The highest ICC scores were found for the vastus lateralis, rectus femoris, upper arm anterior, and the trunk (ICC = 0.72 to 1.00). All included validity studies found ICC scores ranging from 0.92 to 0.999. Two studies describing the validity of ultrasound to predict lean body mass showed good validity as compared with dual-energy X-ray absorptiometry (r² = 0.92 to 0.96). This systematic review shows that ultrasound is a reliable and valid tool for the assessment of muscle size in older adults. More high-quality research is required to confirm these findings in both clinical and healthy populations. Furthermore, ultrasound assessment of small muscles needs further evaluation. Ultrasound to predict lean body mass is feasible; however, future research is required to validate prediction equations in older adults with varying function and health. PMID:28703496
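The ICC scores pooled above come in several variants; as one concrete example, the one-way random-effects ICC(1,1) can be computed directly from a subjects-by-raters table (the reviewed studies report a mix of ICC forms, so this is only one of them):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for a subjects-by-raters table.
    ratings: list of rows, one per subject, each with k ratings."""
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    subj_means = [sum(row) / k for row in ratings]
    # between-subjects and within-subject mean squares from one-way ANOVA
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, subj_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Values near 1 mean the raters (or repeated scans) agree almost perfectly, which is the range reported for the large muscles above.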
ProteinWorldDB: querying radical pairwise alignments among protein sets from complete genomes
Otto, Thomas Dan; Catanho, Marcos; Tristão, Cristian; Bezerra, Márcia; Fernandes, Renan Mathias; Elias, Guilherme Steinberger; Scaglia, Alexandre Capeletto; Bovermann, Bill; Berstis, Viktors; Lifschitz, Sergio; de Miranda, Antonio Basílio; Degrave, Wim
2010-01-01
Motivation: Many analyses in modern biological research are based on comparisons between biological sequences, resulting in functional, evolutionary and structural inferences. When large numbers of sequences are compared, heuristics are often used resulting in a certain lack of accuracy. In order to improve and validate results of such comparisons, we have performed radical all-against-all comparisons of 4 million protein sequences belonging to the RefSeq database, using an implementation of the Smith–Waterman algorithm. This extremely intensive computational approach was made possible with the help of World Community Grid™, through the Genome Comparison Project. The resulting database, ProteinWorldDB, which contains coordinates of pairwise protein alignments and their respective scores, is now made available. Users can download, compare and analyze the results, filtered by genomes, protein functions or clusters. ProteinWorldDB is integrated with annotations derived from Swiss-Prot, Pfam, KEGG, NCBI Taxonomy database and gene ontology. The database is a unique and valuable asset, representing a major effort to create a reliable and consistent dataset of cross-comparisons of the whole protein content encoded in hundreds of completely sequenced genomes using a rigorous dynamic programming approach. Availability: The database can be accessed through http://proteinworlddb.org Contact: otto@fiocruz.br PMID:20089515
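The rigorous dynamic-programming comparison the abstract describes can be sketched in miniature. The scorer below is a minimal Smith–Waterman local-alignment implementation; the scoring scheme (match = 2, mismatch = -1, gap = -1) is an illustrative assumption, not the parameterisation used by the Genome Comparison Project, which would also use substitution matrices and affine gap penalties.

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score between sequences a and b
    (Smith-Waterman dynamic programming, linear gap penalty)."""
    rows, cols = len(a) + 1, len(b) + 1
    # H[i][j] holds the best local alignment score ending at a[i-1], b[j-1]
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores are floored at zero
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

An all-against-all comparison simply evaluates this scorer (in practice a heavily optimised implementation) over every pair of sequences.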
Yang, Xiaohuan; Huang, Yaohuan; Dong, Pinliang; Jiang, Dong; Liu, Honghui
2009-01-01
The spatial distribution of population is closely related to land use and land cover (LULC) patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS) have been developed, which makes efficient updating of geo-referenced population data possible. A Spatial Population Updating System (SPUS) is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS L1B) data integrated with a Pattern Decomposition Method (PDM) and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM). The PSM embedded in SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable.
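The redistribution step the abstract describes, allocating a census total onto grid cells according to land use, can be sketched as simple dasymetric weighting. The land-use weights below are invented for illustration; the actual PSM derives its parameters from natural and socio-economic variables, not from a fixed lookup table.

```python
def redistribute_population(total_pop, cell_landuse, weights):
    """Spread a census total over grid cells in proportion to
    per-land-use weights (dasymetric mapping sketch)."""
    raw = [weights[lu] for lu in cell_landuse]
    s = sum(raw)
    return [total_pop * w / s for w in raw]

# Hypothetical weights: urban cells attract far more population than cropland.
weights = {"urban": 10.0, "cropland": 1.0, "water": 0.0}
cells = ["urban", "urban", "cropland", "water"]
pop = redistribute_population(21000, cells, weights)
```

The grid total always matches the census control total, which is the property that lets finer census data (as in the Yishui County validation) check the spatial pattern rather than the sum.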
SolEST database: a "one-stop shop" approach to the study of Solanaceae transcriptomes.
D'Agostino, Nunzio; Traini, Alessandra; Frusciante, Luigi; Chiusano, Maria Luisa
2009-11-30
Since no genome sequences of solanaceous plants have yet been completed, expressed sequence tag (EST) collections represent a reliable tool for broad sampling of Solanaceae transcriptomes, an attractive route for understanding Solanaceae genome functionality and a powerful reference for the structural annotation of emerging Solanaceae genome sequences. We describe the SolEST database http://biosrv.cab.unina.it/solestdb which integrates different EST datasets from both cultivated and wild Solanaceae species and from two species of the genus Coffea. Background as well as processed data contained in the database, extensively linked to external related resources, represent an invaluable source of information for these plant families. Two novel features differentiate SolEST from other resources: i) the option of accessing and then visualizing Solanaceae EST/TC alignments along the emerging tomato and potato genome sequences; ii) the opportunity to compare different Solanaceae assemblies generated by diverse research groups in the attempt to address a common complaint in the SOL community. Different databases have been established worldwide for collecting Solanaceae ESTs and are related in concept, content and utility to the one presented herein. However, the SolEST database has several distinguishing features that make it appealing for the research community and facilitates a "one-stop shop" for the study of Solanaceae transcriptomes.
GMOMETHODS: the European Union database of reference methods for GMO analysis.
Bonfini, Laura; Van den Bulcke, Marc H; Mazzara, Marco; Ben, Enrico; Patak, Alexandre
2012-01-01
In order to provide reliable and harmonized information on methods for GMO (genetically modified organism) analysis we have published a database called "GMOMETHODS" that supplies information on PCR assays validated according to the principles and requirements of ISO 5725 and/or the International Union of Pure and Applied Chemistry protocol. In addition, the database contains methods that have been verified by the European Union Reference Laboratory for Genetically Modified Food and Feed in the context of compliance with a European Union legislative act. The web application provides search capabilities to retrieve primer and probe sequence information on the available methods. It further supplies core data required by analytical labs to carry out GM tests and comprises information on the applied reference material and plasmid standards. The GMOMETHODS database currently contains 118 different PCR methods allowing identification of 51 single GM events and 18 taxon-specific genes in a sample. It also provides screening assays for detection of eight different genetic elements commonly used for the development of GMOs. The application is referred to by the Biosafety Clearing House, a global mechanism set up by the Cartagena Protocol on Biosafety to facilitate the exchange of information on Living Modified Organisms. The publication of the GMOMETHODS database can be considered an important step toward worldwide standardization and harmonization in GMO analysis.
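The search capability described, retrieving validated methods and their primer sequences for a given GM event, amounts to filtering records on an event field. The records, event names, and sequences below are invented placeholders, not actual GMOMETHODS entries.

```python
# Hypothetical records mimicking the kind of fields the abstract describes.
methods = [
    {"event": "EVENT-A", "target": "event-specific", "forward": "ATGCGT", "reverse": "TTAGCC"},
    {"event": "EVENT-B", "target": "taxon-specific", "forward": "GGCATA", "reverse": "CCGTTA"},
    {"event": "EVENT-A", "target": "screening", "forward": "AACGTG", "reverse": "TGCACC"},
]

def methods_for_event(records, event):
    """Return every validated method applicable to a given GM event."""
    return [r for r in records if r["event"] == event]
```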
Torgerson, Carinna M; Quinn, Catherine; Dinov, Ivo; Liu, Zhizhong; Petrosyan, Petros; Pelphrey, Kevin; Haselgrove, Christian; Kennedy, David N; Toga, Arthur W; Van Horn, John Darrell
2015-03-01
Under the umbrella of the National Database for Clinical Trials (NDCT) related to mental illnesses, the National Database for Autism Research (NDAR) seeks to gather, curate, and make openly available neuroimaging data from NIH-funded studies of autism spectrum disorder (ASD). NDAR has recently made its database accessible through the LONI Pipeline workflow design and execution environment to enable large-scale analyses of cortical architecture and function via local, cluster, or "cloud"-based computing resources. This presents a unique opportunity to overcome many of the customary limitations to fostering biomedical neuroimaging as a science of discovery. Providing open access to primary neuroimaging data, workflow methods, and high-performance computing will increase uniformity in data collection protocols, encourage greater reliability of published data, results replication, and broaden the range of researchers now able to perform larger studies than ever before. To illustrate the use of NDAR and LONI Pipeline for performing several commonly performed neuroimaging processing steps and analyses, this paper presents example workflows useful for ASD neuroimaging researchers seeking to begin using this valuable combination of online data and computational resources. We discuss the utility of such database and workflow processing interactivity as a motivation for the sharing of additional primary data in ASD research and elsewhere.
Cutolo, Maurizio; Vanhaecke, Amber; Ruaro, Barbara; Deschepper, Ellen; Ickinger, Claudia; Melsens, Karin; Piette, Yves; Trombetta, Amelia Chiara; De Keyser, Filip; Smith, Vanessa
2018-06-06
A reliable tool to evaluate flow is paramount in systemic sclerosis (SSc). We describe herein a systematic literature review on the reliability of laser speckle contrast analysis (LASCA) to measure the peripheral blood perfusion (PBP) in SSc, together with an additional pilot study investigating the intra- and inter-rater reliability of LASCA. A systematic search was performed in 3 electronic databases, according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. In the pilot study, 30 SSc patients and 30 healthy subjects (HS) underwent LASCA assessment. Intra-rater reliability was assessed by having a first anchor rater perform the measurements at 2 time-points, and inter-rater reliability by having the anchor rater and a team of second raters perform the measurements in 15 SSc patients and 30 HS. The measurements were repeated with a second anchor rater in the other 15 SSc patients, as external validation. Only 1 of the 14 records of interest identified through the systematic search was included in the final analysis. In the additional pilot study, the intra-class correlation coefficient (ICC) for intra-rater reliability of the first anchor rater was 0.95 in SSc and 0.93 in HS, and the ICC for inter-rater reliability was 0.97 in SSc and 0.93 in HS. Intra- and inter-rater reliability of the second anchor rater was 0.78 and 0.87, respectively. The identified literature regarding the reliability of LASCA measurements reports good to excellent inter-rater agreement. This pilot study confirmed the good to excellent inter-rater agreement and additionally found good to excellent intra-rater reliability. Furthermore, similar results were found in the external validation. Copyright © 2018. Published by Elsevier B.V.
Yee, Lydia T S
2017-01-01
Words are frequently used as stimuli in cognitive psychology experiments, for example, in recognition memory studies. In these experiments, it is often desirable to control for the words' psycholinguistic properties because differences in such properties across experimental conditions might introduce undesirable confounds. In order to avoid confounds, studies typically check to see if various affective and lexico-semantic properties are matched across experimental conditions, and so databases that contain values for these properties are needed. While word ratings for these variables exist in English and other European languages, ratings for Chinese words are not comprehensive. In particular, while ratings for single characters exist, ratings for two-character words, which often have different meanings than their constituent characters, are scarce. In this study, ratings for 292 two-character Chinese nouns were obtained from Cantonese speakers in Hong Kong. Affective variables, including valence and arousal, and lexico-semantic variables, including familiarity, concreteness, and imageability, were rated in the study. The words were selected from a film subtitle database containing word frequency information that could be extracted and listed alongside the resulting ratings. Overall, the subjective ratings showed good reliability across all rated dimensions, as well as good reliability within and between the different groups of participants who each rated a subset of the words. Moreover, several well-established relationships between the variables found consistently in other languages were also observed in this study, demonstrating that the ratings are valid. The resulting word database can be used in studies where control for the above psycholinguistic variables is critical to the research design.
Cataract influence on iris recognition performance
NASA Astrophysics Data System (ADS)
Trokielewicz, Mateusz; Czajka, Adam; Maciejewicz, Piotr
2014-11-01
This paper presents an experimental study revealing weaker performance of automatic iris recognition methods for cataract-affected eyes when compared to healthy eyes. There is little research on the topic, mostly incorporating scarce databases that are often deficient in images representing more than one illness. We built our own database, acquiring 1288 eye images of 37 patients of the Medical University of Warsaw. Those images represent several common ocular diseases, such as cataract, along with less ordinary conditions, such as iris pattern alterations derived from illness or eye trauma. Images were captured in near-infrared light (used in biometrics) and for selected cases also in visible light (used in ophthalmological diagnosis). Since cataract is the disorder most populated by samples in the database, in this paper we focus solely on this illness. To assess the extent of the performance deterioration we use three iris recognition methodologies (commercial and academic solutions) to calculate genuine match scores for healthy eyes and those influenced by cataract. Results show a significant degradation in iris recognition reliability, manifested by worsening of the genuine scores in all three matchers used in this study (12% genuine score increase for an academic matcher, up to 175% genuine score increase for an example commercial matcher). This increase in genuine scores affected the final false non-match rate in two matchers. To the best of our knowledge this is the only study of its kind that employs more than one iris matcher and analyzes iris image segmentation as a potential source of decreased reliability.
The validation of an educational database for children with profound intellectual disabilities
Corten, Lieselotte; van Rensburg, Winnie; Kilian, Elizma; McKenzie, Judith; Vorster, Hein; Jelsma, Jennifer
2016-01-01
Background The Western Cape Forum for Intellectual Disability took the South African Government to court in 2010 on its failure to implement the right to education for Children with Severe and Profound Intellectual Disability. Subsequently, multidisciplinary teams were appointed by the Western Cape Education Department to deliver services to the Special Care Centres (SCCs). Initially, minimal information was available on this population. Objectives The purpose is to document the process of developing and validating a database for the collection of routine data. Method A descriptive analytical study design was used. A sample of convenience was drawn from individuals under the age of 18 years, enrolled in SCCs in the Western Cape. The team who entered and analysed the data reached consensus regarding the utility and feasibility of each item. Results Data were collected on 134 children. The omission of certain items from the database was identified. Some information was not reliable or readily available. Of the instruments identified to assess function, the classification systems were found to be reliable and useful, as were the performance scales. The WeeFIM, on the other hand, was lengthy and expensive, and was therefore discarded. Discussion and conclusions A list of items to be included was identified. Apart from providing an individual profile, the database can be useful for service planning and monitoring, if incorporated into the central information system used to monitor the performance of all children. Without such inclusion, this most vulnerable population, despite the court ruling, will not have their right to education adequately addressed. PMID:28730055
Rajagopalan, Malolan S; Khanna, Vineet K; Leiter, Yaacov; Stott, Meghan; Showalter, Timothy N; Dicker, Adam P; Lawrence, Yaacov R
2011-09-01
A wiki is a collaborative Web site, such as Wikipedia, that can be freely edited. Because of a wiki's lack of formal editorial control, we hypothesized that the content would be less complete and accurate than that of a professional peer-reviewed Web site. In this study, the coverage, accuracy, and readability of cancer information on Wikipedia were compared with those of the patient-orientated National Cancer Institute's Physician Data Query (PDQ) comprehensive cancer database. For each of 10 cancer types, medically trained personnel scored PDQ and Wikipedia articles for accuracy and presentation of controversies by using an appraisal form. Reliability was assessed by using interobserver variability and test-retest reproducibility. Readability was calculated from word and sentence length. Evaluators were able to rapidly assess articles (18 minutes/article), with a test-retest reliability of 0.71 and interobserver variability of 0.53. For both Web sites, inaccuracies were rare, less than 2% of information examined. PDQ was significantly more readable than Wikipedia: Flesch-Kincaid grade level 9.6 versus 14.1. There was no difference in depth of coverage between PDQ and Wikipedia (29.9, 34.2, respectively; maximum possible score 72). Controversial aspects of cancer care were relatively poorly discussed in both resources (2.9 and 6.1 for PDQ and Wikipedia, respectively, NS; maximum possible score 18). A planned subanalysis comparing common and uncommon cancers demonstrated no difference. Although the wiki resource had similar accuracy and depth as the professionally edited database, it was significantly less readable. Further research is required to assess how this influences patients' understanding and retention.
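The readability comparison rests on the Flesch-Kincaid grade level, which is a fixed formula over word, sentence, and syllable counts. A hedged sketch follows; the vowel-group syllable counter is a crude heuristic, so the output approximates, rather than reproduces, the grades reported in the study.

```python
import re

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / word) - 15.59.
    Syllables are estimated by counting vowel groups, a rough heuristic."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59
```

Longer sentences and polysyllabic vocabulary drive the grade up, which is why jargon-heavy articles can score well above the 9.6 reported for PDQ.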
Studies of Physical Education in the United States Using SOFIT: A Review.
McKenzie, Thomas L; Smith, Nicole J
2017-12-01
An objective database for physical education (PE) is important for policy and practice decisions, and the System for Observing Fitness Instruction Time (SOFIT) has been identified as an appropriate surveillance tool for PE across the nation. The purpose of this review was to assess peer-reviewed studies using SOFIT to study K-12 PE in U.S. schools. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses informed the review, and 10 databases were searched for English-language articles published through 2016. A total of 704 records identifying SOFIT were located, and 137 full texts were read. Two authors reviewed full-text articles, and a data extraction tool was developed to select studies and main topics for synthesis. Twenty-nine studies that included direct observations of 12,256 PE lessons met inclusion criteria; 17 were conducted in elementary schools, 9 in secondary schools, and 3 in combined-level schools. Inconsistent reporting among studies was evident, including not all identifying the number of classes and teachers involved. All studies reported student physical activity, but fewer reported observer reliabilities (88%), lesson context (76%), teacher behavior (38%), and PE dosage (34%). The most frequently analyzed independent variables were teacher preparation (48%), lesson location (38%), and student gender (31%). SOFIT can be used reliably in diverse settings. Inconsistent reporting about study procedures and variables analyzed, however, limited comparisons among studies. Adherence to an established protocol and more consistent reporting would more fully enable the development of a viable database for PE in U.S. schools.
Au, Lewis; Turner, Natalie; Wong, Hui-Li; Field, Kathryn; Lee, Belinda; Boadle, David; Cooray, Prasad; Karikios, Deme; Kosmider, Suzanne; Lipton, Lara; Nott, Louise; Parente, Phillip; Tie, Jeanne; Tran, Ben; Wong, Rachel; Yip, Desmond; Shapiro, Jeremy; Gibbs, Peter
2018-04-01
Current efforts to understand patient management in clinical practice are largely based on clinician surveys with uncertain reliability. The TRACC (Treatment of Recurrent and Advanced Colorectal Cancer) database is a multisite registry collecting comprehensive treatment and outcome data on consecutive metastatic colorectal cancer (mCRC) patients at multiple sites across Australia. This study aims to determine the accuracy of oncologists' impressions of real-world practice by comparing clinicians' estimates to data captured by TRACC. Nineteen medical oncologists from nine hospitals contributing data to TRACC completed a 34-question survey regarding their impression of the management and outcomes of mCRC at their own practice and other hospitals contributing to the database. Responses were then compared with TRACC data to determine how closely their impressions reflected actual practice. Data on 1300 patients with mCRC were available. The median clinician-estimated frequency of KRAS testing within 6 months of diagnosis was 80% (range: 20-100%); the TRACC documented rate was 43%. Clinicians generally overestimated the rates of first-line treatment, particularly in patients over 75 years. The estimate for first-line bevacizumab was 60% (35-80%) versus 49% in TRACC. The estimated rate of liver resection varied substantially (5-35%), and the estimated median (27%) was inconsistent with the TRACC rate (12%). Oncologists generally felt their practice was similar to other hospitals. Oncologists' estimates of current clinical practice varied and were discordant with the TRACC database, often with a tendency to overestimate interventions. Clinician surveys alone do not reliably capture contemporary clinical practices in mCRC. © 2017 John Wiley & Sons Australia, Ltd.
Becker, Pierre T; de Bel, Annelies; Martiny, Delphine; Ranque, Stéphane; Piarroux, Renaud; Cassagne, Carole; Detandt, Monique; Hendrickx, Marijke
2014-11-01
The identification of filamentous fungi by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) relies mainly on a robust and extensive database of reference spectra. To this end, a large in-house library containing 760 strains and representing 472 species was built and evaluated on 390 clinical isolates by comparing MALDI-TOF MS with the classical identification method based on morphological observations. The use of MALDI-TOF MS resulted in the correct identification of 95.4% of the isolates at species level, without considering LogScore values. Taking into account Bruker's cutoff value for reliability (LogScore >1.70), 85.6% of the isolates were correctly identified. For a number of isolates, microscopic identification was limited to the genus, resulting in only 61.5% of the isolates correctly identified at species level, while correctness reached 94.6% at genus level. Using this extended in-house database, MALDI-TOF MS thus appears superior to morphology for obtaining a robust and accurate identification of filamentous fungi. A continuous extension of the library is however necessary to further improve its reliability. Indeed, 15 isolates were still not represented while an additional three isolates were not recognized, probably because of a lack of intraspecific variability of the corresponding species in the database. © The Author 2014. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
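The LogScore cutoff the abstract applies reduces to a simple threshold rule on the best database match. A minimal sketch, with invented species names and scores; real Bruker software reports a ranked hit list rather than a bare dictionary:

```python
def best_identification(logscores, cutoff=1.70):
    """Return the top database hit if its LogScore clears the
    reliability cutoff (1.70, per the abstract), else None."""
    species, score = max(logscores.items(), key=lambda kv: kv[1])
    return species if score > cutoff else None

# Invented example hit lists for two isolates.
confident = {"Aspergillus fumigatus": 2.1, "Aspergillus flavus": 1.5}
uncertain = {"Fusarium solani": 1.4}
```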
Forsythe, Stephen J; Dickins, Benjamin; Jolley, Keith A
2014-12-16
Following the association of Cronobacter spp. with several publicized fatal outbreaks of meningitis and necrotising enterocolitis in neonatal intensive care units, the World Health Organization (WHO) in 2004 requested the establishment of a molecular typing scheme to enable the international control of the organism. This paper presents the application of Next Generation Sequencing (NGS) to Cronobacter, which has led to the establishment of the Cronobacter PubMLST genome and sequence definition database (http://pubmlst.org/cronobacter/) containing over 1000 isolates with metadata, along with the recognition of specific clonal lineages linked to neonatal meningitis and adult infections. Whole genome sequencing and multilocus sequence typing (MLST) support the formal recognition of the genus Cronobacter, composed of seven species, to replace the former single species Enterobacter sakazakii. Applying the 7-loci MLST scheme to 1007 strains revealed 298 definable sequence types, yet only C. sakazakii clonal complex 4 (CC4) was principally associated with neonatal meningitis. This clonal lineage has been confirmed using ribosomal-MLST (51 loci) and whole genome-MLST (1865 loci) to analyse 107 whole genomes via the Cronobacter PubMLST database. This database has enabled the retrospective analysis of historic cases and outbreaks following re-identification of those strains. The Cronobacter PubMLST database offers a central, open access, reliable sequence-based repository for researchers. It has the capacity to create new analysis schemes 'on the fly', and to integrate metadata (source, geographic distribution, clinical presentation). It is also expandable and adaptable to changes in taxonomy, and able to support the development of reliable detection methods of use to industry and regulatory authorities. It therefore meets the WHO (2004) request for the establishment of a typing scheme for this emergent bacterial pathogen.
Whole genome sequencing has additionally revealed a range of potential virulence and environmental fitness traits which may account for the pathogenicity of C. sakazakii CC4 and its propensity for neonatal CNS infection.
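Once alleles are called, 7-loci MLST typing reduces to looking up the allele profile in a profile table. The locus names below follow the published seven-gene Cronobacter scheme, but the allele numbers and sequence-type assignments are invented for illustration, not real PubMLST entries.

```python
# Seven loci of the Cronobacter MLST scheme (names from the literature;
# allele numbers and profiles below are invented).
LOCI = ("atpD", "fusA", "glnS", "gltB", "gyrB", "infB", "ppsA")

PROFILES = {
    (5, 1, 3, 3, 5, 5, 4): "ST4",  # invented profile, not the real ST4 alleles
    (1, 1, 1, 1, 1, 1, 1): "ST1",  # invented
}

def sequence_type(allele_profile):
    """Map a 7-locus allele profile to its sequence type,
    or flag it as novel if the profile is not yet in the table."""
    return PROFILES.get(tuple(allele_profile), "novel ST")
```

Ribosomal-MLST and whole-genome-MLST extend exactly the same lookup idea to 51 and 1865 loci respectively.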
Friedman, Lee; Nixon, Mark S; Komogortsev, Oleg V
2017-01-01
We introduce the intraclass correlation coefficient (ICC) to the biometric community as an index of the temporal persistence, or stability, of a single biometric feature. It requires, as input, a feature on an interval or ratio scale which is reasonably normally distributed, and it can only be calculated if each subject is tested on 2 or more occasions. For a biometric system with multiple features available for selection, the ICC can be used to measure the relative stability of each feature. We show, for 14 distinct data sets (1 synthetic, 8 eye-movement-related, 2 gait-related, 2 face-recognition-related, and 1 brain-structure-related), that selecting the most stable features, based on the ICC, generally resulted in the best biometric performance. Analyses based on using only the most stable features produced superior Rank-1-Identification Rate (Rank-1-IR) performance in 12 of 14 databases (p = 0.0065, one-tailed), when compared to other sets of features, including the set of all features. For Equal Error Rate (EER), using a subset of only high-ICC features also produced superior performance in 12 of 14 databases (p = 0.0065, one-tailed). In general, then, for our databases, prescreening potential biometric features and choosing only highly reliable features yields better performance than choosing lower-ICC features or all features combined. We also determined that, as the ICC of a group of features increases, the median of the genuine similarity score distribution increases and the spread of this distribution decreases. There were no statistically significant similar relationships for the impostor distributions. We believe that the ICC will find many uses in biometric research. In the case of eye-movement-driven biometrics, the use of reliable features, as measured by ICC, allowed us to achieve an authentication performance of EER = 2.01%, which was not possible before.
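For readers unfamiliar with the statistic, one common single-rater form, ICC(3,1), can be computed from a subjects-by-sessions matrix as follows. The abstract does not state which ICC form the authors used, so this choice is an assumption for illustration only.

```python
import numpy as np

def icc_3_1(X):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.
    Rows = subjects, columns = sessions (occasions)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    grand = X.mean()
    # Between-subjects mean square
    ms_rows = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    # Residual (interaction) mean square
    resid = X - X.mean(axis=1, keepdims=True) - X.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

stable = [[1, 2], [2, 3], [3, 4]]   # subject ordering preserved across sessions
noisy = [[1, 3], [2, 1], [3, 2]]    # ordering scrambled between sessions
```

A feature whose between-subject ordering is preserved across sessions scores near 1; a feature whose ordering scrambles between sessions scores near (or below) 0, which is why the ICC ranks features by temporal persistence.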
The relationship between cost estimates reliability and BIM adoption: SEM analysis
NASA Astrophysics Data System (ADS)
Ismail, N. A. A.; Idris, N. H.; Ramli, H.; Rooshdi, R. R. Raja Muhammad; Sahamir, S. R.
2018-02-01
This paper presents the usage of the Structural Equation Modelling (SEM) approach in analysing the effects of Building Information Modelling (BIM) technology adoption in improving the reliability of cost estimates. Based on the questionnaire survey results, SEM analysis using the SPSS-AMOS application examined the relationships between BIM-improved information and cost estimates reliability factors, leading to BIM technology adoption. Six hypotheses were established prior to SEM analysis employing two types of SEM models, namely the Confirmatory Factor Analysis (CFA) model and the full structural model. The SEM models were then validated through the assessment of their uni-dimensionality, validity, reliability, and fitness index, in line with the hypotheses tested. The final SEM model fit measures are: P-value=0.000, RMSEA=0.079<0.08, GFI=0.824, CFI=0.962>0.90, TLI=0.956>0.90, NFI=0.935>0.90 and ChiSq/df=2.259; indicating that the overall index values achieved the required level of model fitness. The model supports all the hypotheses evaluated, confirming that all relationships amongst the constructs are positive and significant. Ultimately, the analysis verified that most of the respondents foresee better understanding of project input information through BIM visualization, its reliable database and coordinated data, in developing more reliable cost estimates. They also perceive that BIM adoption will accelerate their cost estimating tasks.
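As a reminder of how one of the reported fit indices is derived, RMSEA follows directly from the model chi-square, its degrees of freedom, and the sample size. The df and n below are hypothetical, chosen only so that the arithmetic lands near the paper's reported RMSEA of 0.079; the paper reports the indices, not the raw chi-square inputs.

```python
import math

def rmsea(chi_sq, df, n):
    """Root mean square error of approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

# Hypothetical inputs consistent with ChiSq/df = 2.259 (df and n invented).
value = rmsea(225.9, 100, 201)
```

A model that fits no worse than its degrees of freedom allow (chi-square ≤ df) yields RMSEA = 0, and the conventional acceptance threshold is the < 0.08 cited in the abstract.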
Blais, Julie; Forth, Adelle E; Hare, Robert D
2017-06-01
The goal of the current study was to assess the interrater reliability of the Psychopathy Checklist-Revised (PCL-R) among a large sample of trained raters (N = 280). All raters completed PCL-R training at some point between 1989 and 2012 and subsequently provided complete coding for the same 6 practice cases. Overall, 3 major conclusions can be drawn from the results: (a) reliability of individual PCL-R items largely fell below any appropriate standards while the estimates for Total PCL-R scores and factor scores were good (but not excellent); (b) the cases representing individuals with high psychopathy scores showed better reliability than did the cases of individuals in the moderate to low PCL-R score range; and (c) there was a high degree of variability among raters; however, rater specific differences had no consistent effect on scoring the PCL-R. Therefore, despite low reliability estimates for individual items, Total scores and factor scores can be reliably scored among trained raters. We temper these conclusions by noting that scoring standardized videotaped case studies does not allow the rater to interact directly with the offender. Real-world PCL-R assessments typically involve a face-to-face interview and much more extensive collateral information. We offer recommendations for new web-based training procedures. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Developing safety performance functions incorporating reliability-based risk measures.
Ibrahim, Shewkar El-Bassiouni; Sayed, Tarek
2011-11-01
Current geometric design guides provide deterministic standards where the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from these standards. Several studies have advocated probabilistic geometric design where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a risk measure of the implication of deviation from design standards. However, there is currently no link between measures of design reliability and the quantification of safety using collision frequency. The analysis presented in this paper attempts to bridge this gap by incorporating a reliability-based quantitative risk measure such as the probability of non-compliance (P(nc)) in safety performance functions (SPFs). Establishing this link will allow admitting reliability-based design into traditional benefit-cost analysis and should lead to a wider application of the reliability technique in road design. The present application is concerned with the design of horizontal curves, where the limit state function is defined in terms of the available (supply) and stopping (demand) sight distances. A comprehensive collision and geometric design database of two-lane rural highways is used to investigate the effect of the probability of non-compliance on safety. The reliability analysis was carried out using the First Order Reliability Method (FORM). Two Negative Binomial (NB) SPFs were developed to compare models with and without the reliability-based risk measures. It was found that models incorporating the P(nc) provided a better fit to the data set than the traditional (without risk) NB SPFs for total, injury and fatality (I+F) and property damage only (PDO) collisions. Copyright © 2011 Elsevier Ltd. All rights reserved.
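The paper's P(nc) comes from FORM; as a rough illustration of the same limit state (G = available sight distance minus stopping sight distance, with non-compliance when G < 0), here is a crude Monte Carlo sketch. The distribution parameters are invented for illustration and are not taken from the study:

```python
import random

def pnc_monte_carlo(n=100_000, seed=42):
    """Crude Monte Carlo estimate of the probability of non-compliance
    P(nc) = P(demand > supply) for a horizontal curve.
    The normal parameters below are purely illustrative assumptions."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        supply = rng.gauss(180.0, 25.0)   # available sight distance (m), assumed
        demand = rng.gauss(150.0, 30.0)   # stopping sight distance (m), assumed
        if demand > supply:               # limit state G = supply - demand < 0
            failures += 1
    return failures / n
```

FORM replaces this sampling with a linearization of the limit state at the design point; the Monte Carlo version is only a transparent stand-in for the same quantity.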
Taghipour, Morteza; Mohseni-Bandpei, Mohammad Ali; Behtash, Hamid; Abdollahi, Iraj; Rajabzadeh, Fatemeh; Pourahmadi, Mohammad Reza; Emami, Mahnaz
2018-04-24
Rehabilitative ultrasound (US) imaging has become a popular method in recent years for investigating muscle morphologic characteristics and dimensions. The reliability of this method has been investigated in studies of differing design and quality, so reported reliability values for rehabilitative US span a wide range. The objective of this study was to systematically review the literature conducted on the reliability of rehabilitative US imaging for the assessment of deep abdominal and lumbar trunk muscle dimensions. The PubMed/MEDLINE, Scopus, Google Scholar, Science Direct, Embase, Physiotherapy Evidence, Ovid, and CINAHL databases were searched to identify original research articles conducted on the reliability of rehabilitative US imaging published from June 2007 to August 2017. The articles were qualitatively assessed; reliability data were extracted; and the methodological quality was evaluated by 2 independent reviewers. Of the 26 included studies, 16 were considered of high methodological quality. Except for 2 studies, all high-quality studies reported intraclass correlation coefficients (ICCs) for intra-rater reliability of 0.70 or greater. Also, ICCs reported for inter-rater reliability in high-quality studies were generally greater than 0.70. Among low-quality studies, reported ICCs ranged from 0.26 to 0.99 and 0.68 to 0.97 for intra- and inter-rater reliability, respectively. Also, the reported standard error of measurement and minimal detectable change for rehabilitative US were generally in an acceptable range. Generally, the results of the reviewed studies indicate that rehabilitative US imaging has good levels of both inter- and intra-rater reliability. © 2018 by the American Institute of Ultrasound in Medicine.
The SACADA database for human reliability and human performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y. James Chang; Dennis Bley; Lawrence Criscione
2014-05-01
Lack of appropriate and sufficient human performance data has been identified as a key factor affecting human reliability analysis (HRA) quality, especially in the estimation of human error probability (HEP). The Scenario Authoring, Characterization, and Debriefing Application (SACADA) database was developed by the U.S. Nuclear Regulatory Commission (NRC) to address this data need. An agreement between NRC and the South Texas Project Nuclear Operating Company (STPNOC) was established to support the SACADA development, with the aim of making the SACADA tool suitable for implementation in nuclear power plants' operator training programs to collect operator performance information. The collected data would support the STPNOC's operator training program and be shared with the NRC for improving HRA quality. This paper discusses the SACADA data taxonomy, the theoretical foundation, the prospective data to be generated from the SACADA raw data to inform human reliability and human performance, and the considerations on the use of simulator data for HRA. Each SACADA data point consists of two information segments: context and performance results. Context is a characterization of the performance challenges to task success. The performance results are the results of performing the task. The data taxonomy uses a macrocognitive functions model for the framework. At a high level, information is classified according to the macrocognitive functions of detecting the plant abnormality, understanding the abnormality, deciding the response plan, executing the response plan, and team-related aspects (i.e., communication, teamwork, and supervision). The data are expected to be useful for analyzing the relations between context, error modes and error causes in human performance.
Reliability and validity of procedure-based assessments in otolaryngology training.
Awad, Zaid; Hayden, Lindsay; Robson, Andrew K; Muthuswamy, Keerthini; Tolley, Neil S
2015-06-01
To investigate the reliability and construct validity of procedure-based assessment (PBA) in assessing performance and progress in otolaryngology training. Retrospective database analysis using a national electronic database. We analyzed PBAs of otolaryngology trainees in North London from core trainees (CTs) to specialty trainees (STs). The tool contains six multi-item domains: consent, planning, preparation, exposure/closure, technique, and postoperative care, rated as "satisfactory" or "development required," in addition to an overall performance rating (pS) of 1 to 4. Individual domain score, overall calculated score (cS), and number of "development-required" items were calculated for each PBA. Receiver operating characteristic analysis helped determine sensitivity and specificity. There were 3,152 otolaryngology PBAs from 46 otolaryngology trainees analyzed. PBA reliability was high (Cronbach's α 0.899), and sensitivity approached 99%. cS correlated positively with pS and level in training (rs : +0.681 and +0.324, respectively). ST had higher cS and pS than CT (93% ± 0.6 and 3.2 ± 0.03 vs. 71% ± 3.1 and 2.3 ± 0.08, respectively; P < .001). cS and pS increased from CT1 to ST8 showing construct validity (rs : +0.348 and +0.354, respectively; P < .001). The technical skill domain had the highest utilization (98% of PBAs) and was the best predictor of cS and pS (rs : +0.96 and +0.66, respectively). PBA is reliable and valid for assessing otolaryngology trainees' performance and progress at all levels. It is highly sensitive in identifying competent trainees. The tool is used in a formative and feedback capacity. The technical domain is the best predictor and should be given close attention. NA. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
Neubauer, Paul D; Hersey, Denise P; Leder, Steven B
2016-06-01
Identification of pharyngeal residue severity located in the valleculae and pyriform sinuses has always been a primary goal during fiberoptic endoscopic evaluation of swallowing (FEES). Pharyngeal residue is a clinical sign of potential prandial aspiration, making an accurate description of its severity an important but difficult challenge. A reliable, validated, and generalizable pharyngeal residue severity rating scale for FEES would be beneficial. A systematic review of the published English language literature since 1995 was conducted to determine the quality of existing pharyngeal residue severity rating scales based on FEES. Databases were searched using controlled vocabulary words and synonymous free text words for topics of interest (deglutition disorders, pharyngeal residue, endoscopy, videofluoroscopy, fiberoptic technology, aspiration, etc.) and outcomes of interest (scores, scales, grades, tests, FEES, etc.). Search strategies were adjusted for syntax appropriate for each database/platform. MEDLINE (OvidSP, 1946-April Week 3 2015), Embase (OvidSP, 1974-April 20, 2015), Scopus (Elsevier), and the unindexed material in PubMed (NLM/NIH) were searched for relevant articles. Supplementary efforts to identify studies included checking the reference lists of retrieved articles. Scales were compared using qualitative properties (sample size, severity definitions, number of raters, and raters' experience and training) and psychometric analyses (randomization, intra- and inter-rater reliability, and construct validity). Seven articles describing pharyngeal residue severity rating scales met inclusion criteria. Six of the seven scales had insufficient data to support their use, as evidenced by methodological weaknesses in both qualitative properties and psychometric analyses.
There is a need for qualitative and psychometrically reliable, validated, and generalizable pharyngeal residue severity rating scales that are anatomically specific, image-based, and easily learned by both novice and experienced clinicians. Only the Yale Pharyngeal Residue Severity Rating Scale, an anatomically defined and image-based tool, met all qualitative and psychometric criteria necessary for a valid, reliable, and generalizable vallecula and pyriform sinus severity rating scale based on FEES.
Computer-Based and Paper-Based Measurement of Recognition Performance
1989-03-01
...implementation of FLASH and PICTURE is sought in other content areas or subject-matter domains (e.g., ship silhouettes, electronic schemata, human anatomy) to ascertain the universality of the validity and reliability results... the ability to use any specific graphic database (e.g., ship silhouettes, human anatomy, electronic circuits, topography) contributes to its wide applicability.
Current clinical research in orthodontics: a perspective.
Baumrind, Sheldon
2006-10-01
This essay briefly explores the approach of the Craniofacial Research Instrumentation Laboratory (CRIL) to the systematic and rigorous investigation of the usual outcomes of orthodontic treatment in the practices of experienced clinicians. CRIL's goal is to produce a shareable electronic database of reliable, valid, and representative data on clinical practice as an aid to creating an improved environment for truly evidence-based orthodontic treatment.
Pérez-Escamilla, Beatriz; Franco-Trigo, Lucía; Moullin, Joanna C; Martínez-Martínez, Fernando; García-Corpas, José P
2015-01-01
Background Low adherence to pharmacological treatments is one of the factors associated with poor blood pressure control. Questionnaires are an indirect measurement method that is both economical and easy to use. However, questionnaires should meet specific criteria to minimize error and ensure reproducibility of results. Numerous studies have been conducted to design questionnaires that quantify adherence to pharmacological antihypertensive treatments. Nevertheless, it is unknown whether these questionnaires fulfil the minimum requirements of validity and reliability. The aim of this study was to compile validated questionnaires measuring adherence to pharmacological antihypertensive treatments that had at least one measure of validity and one measure of reliability. Methods A literature search was undertaken in PubMed, the Excerpta Medica Database (EMBASE), and the Latin American and Caribbean Health Sciences Literature database (Literatura Latino-Americana e do Caribe em Ciências da Saúde [LILACS]). References from included articles were hand-searched. The included papers were all those published in English, French, Portuguese, or Spanish, from the beginning of each database's indexing until July 8, 2013, in which a validation (at least one demonstration of validity and at least one of reliability) of a questionnaire measuring adherence to antihypertensive pharmacological treatments was performed. Results A total of 234 potential papers were identified in the electronic database search; of these, 12 met the eligibility criteria. Within these 12 papers, six questionnaires were validated: the Morisky–Green–Levine; Brief Medication Questionnaire; Hill-Bone Compliance to High Blood Pressure Therapy Scale; Morisky Medication Adherence Scale; Treatment Adherence Questionnaire for Patients with Hypertension (TAQPH); and Martín–Bayarre–Grau. Questionnaire length ranged from four to 28 items. Internal consistency, assessed by Cronbach's α, varied from 0.43 to 0.889.
Additional statistical techniques utilized to assess the psychometric properties of the questionnaires varied greatly across studies. Conclusion At this stage, none of the six questionnaires included could be considered a gold standard. However, this revision will assist health professionals in the selection of the most appropriate tool for their individual circumstances. PMID:25926723
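Cronbach's α, the internal-consistency statistic reported for these questionnaires (0.43 to 0.889), is computable directly from raw item scores. A minimal sketch with hypothetical data; the helper below is not taken from any of the reviewed papers:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a questionnaire.

    items: list of k item-score columns, each a list of n respondents' scores.
    Uses population variances, the common convention for alpha."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(var(col) for col in items)
    # total score per respondent across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))
```

Two perfectly correlated items give α = 1.0; as items diverge, α falls toward (and below) zero.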
Leckelt, Marius; Wetzel, Eunike; Gerlach, Tanja M; Ackerman, Robert A; Miller, Joshua D; Chopik, William J; Penke, Lars; Geukes, Katharina; Küfner, Albrecht C P; Hutteman, Roos; Richter, David; Renner, Karl-Heinz; Allroggen, Marc; Brecheen, Courtney; Campbell, W Keith; Grossmann, Igor; Back, Mitja D
2018-01-01
Due to increased empirical interest in narcissism across the social sciences, there is a need for inventories that can be administered quickly while also reliably measuring both the agentic and antagonistic aspects of grandiose narcissism. In this study, we sought to validate the factor structure, provide representative descriptive data and reliability estimates, assess the reliability across the trait spectrum, and examine the nomological network of the short version of the Narcissistic Admiration and Rivalry Questionnaire (NARQ-S; Back et al., 2013). We used data from a large convenience sample (total N = 11,937) as well as data from a large representative sample (total N = 4,433) that included responses to other narcissism measures as well as related constructs, including the other Dark Triad traits, Big Five personality traits, and self-esteem. Confirmatory factor analysis and item response theory were used to validate the factor structure and estimate the reliability across the latent trait spectrum, respectively. Results suggest that the NARQ-S shows a robust factor structure and is a reliable and valid short measure of the agentic and antagonistic aspects of grandiose narcissism. We also discuss future directions and applications of the NARQ-S as a short and comprehensive measure of grandiose narcissism. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Toohey, Liam Anthony; De Noronha, Marcos; Taylor, Carolyn; Thomas, James
2015-02-01
Muscle strength measurement is a key component of physiotherapists' assessment and is frequently used as an outcome measure. A sphygmomanometer is an instrument commonly used to measure blood pressure that can potentially be used as a tool to assess isometric muscle strength. To systematically review the evidence on the reliability and validity of a sphygmomanometer for measuring isometric strength of hip muscles. A literature search was conducted across four databases. Studies were eligible if they presented data on reliability and/or validity, used a sphygmomanometer to measure isometric muscle strength of the hip region, and were peer reviewed. The individual studies were evaluated for quality using a standardized critical appraisal tool. A total of 644 articles were screened for eligibility, with five articles chosen for inclusion. The use of a sphygmomanometer to objectively assess isometric muscle strength of the hip muscles appears to be reliable, with intraclass correlation coefficient values ranging from 0.66 to 0.94 in elderly and young populations. No studies were identified that have assessed the validity of a sphygmomanometer. The sphygmomanometer appears to be reliable for assessment of isometric muscle strength around the hip joint, but further research is warranted to establish its validity.
Diagnostic reliability of MMPI-2 computer-based test interpretations.
Pant, Hina; McCabe, Brian J; Deskovitz, Mark A; Weed, Nathan C; Williams, John E
2014-09-01
Reflecting the common use of the MMPI-2 to provide diagnostic considerations, computer-based test interpretations (CBTIs) also typically offer diagnostic suggestions. However, these diagnostic suggestions can sometimes be shown to vary widely across different CBTI programs even for identical MMPI-2 profiles. The present study evaluated the diagnostic reliability of 6 commercially available CBTIs using a 20-item Q-sort task developed for this study. Four raters each sorted diagnostic classifications based on these 6 CBTI reports for 20 MMPI-2 profiles. Two questions were addressed. First, do users of CBTIs understand the diagnostic information contained within the reports similarly? Overall, diagnostic sorts of the CBTIs showed moderate inter-interpreter diagnostic reliability (mean r = .56), with sorts for the 1/2/3 profile showing the highest inter-interpreter diagnostic reliability (mean r = .67). Second, do different CBTIs programs vary with respect to diagnostic suggestions? It was found that diagnostic sorts of the CBTIs had a mean inter-CBTI diagnostic reliability of r = .56, indicating moderate but not strong agreement across CBTIs in terms of diagnostic suggestions. The strongest inter-CBTI diagnostic agreement was found for sorts of the 1/2/3 profile CBTIs (mean r = .71). Limitations and future directions are discussed. PsycINFO Database Record (c) 2014 APA, all rights reserved.
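The inter-interpreter figures above are mean pairwise correlations across raters' Q-sorts; a minimal sketch of that aggregation, using hypothetical sort vectors rather than the study's data:

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation between two equal-length score vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def mean_pairwise_r(sorts):
    """Average Pearson r over all pairs of raters' Q-sort score vectors."""
    rs = [pearson(a, b) for a, b in combinations(sorts, 2)]
    return sum(rs) / len(rs)
```

With k raters this averages k(k-1)/2 pairwise correlations, which is how a single "mean r" summarizes agreement across the whole panel.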
Muzerengi, S; Rick, C; Begaj, I; Ives, N; Evison, F; Woolley, R L; Clarke, C E
2017-05-01
Hospital Episode Statistics data are used for healthcare planning and hospital reimbursements. Reliability of these data depends on the accuracy of individual hospitals' reporting to the Secondary Uses Service (SUS), which includes hospitalisation data. The number and coding accuracy of Parkinson's disease hospital admissions at a tertiary centre in Birmingham were assessed. Retrospective, routine-data-based study. A retrospective electronic database search for all Parkinson's disease patients admitted to the tertiary hospital over a 4-year period (2009-2013) was performed on the SUS database using International Classification of Disease codes, and on the local inpatient electronic prescription database, Prescription and Information Communications System, using medication prescriptions. Capture-recapture methods were used to estimate the number of patients and admissions missed by both databases. From the two databases, between July 2009 and June 2013, 1068 patients with Parkinson's disease accounted for 1999 admissions. During these admissions, Parkinson's disease was coded as a primary or secondary diagnosis. Ninety-one percent of these admissions were recorded on the SUS database. Capture-recapture methods estimated that the number of Parkinson's disease patients admitted during this period was 1127 (95% confidence interval: 1107-1146). A supplementary search of both SUS and the Prescription and Information Communications System was undertaken using the hospital numbers of these 1068 patients. This identified another 479 admissions. The SUS database under-estimated Parkinson's disease admissions by 27% during the study period. The accuracy of disease coding is critical for healthcare policy planning and must be improved. If the under-reporting of Parkinson's disease admissions on the SUS database is repeated nationally, expenditure on Parkinson's disease admissions in England is under-estimated by approximately £61 million per year.
Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
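The study's population estimate (1127 patients from 1068 observed) comes from two-source capture-recapture. A minimal sketch of the standard Lincoln-Petersen estimator with Chapman's small-sample correction; the counts in the usage note are hypothetical, since the abstract does not report the per-database overlap:

```python
def lincoln_petersen(n1, n2, m, chapman=True):
    """Two-source capture-recapture estimate of total population size.

    n1, n2: cases identified in each database
    m:      cases identified in both databases (the overlap)
    Chapman's correction reduces bias when counts are small."""
    if m == 0 and not chapman:
        raise ValueError("overlap m must be > 0 for the uncorrected estimator")
    if chapman:
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1
    return n1 * n2 / m
```

With hypothetical counts n1=100, n2=80 and overlap m=40, the uncorrected estimate is 100 × 80 / 40 = 200; the Chapman-corrected value is slightly lower.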
Does an Otolaryngology-Specific Database Have Added Value? A Comparative Feasibility Analysis.
Bellmunt, Angela M; Roberts, Rhonda; Lee, Walter T; Schulz, Kris; Pynnonen, Melissa A; Crowson, Matthew G; Witsell, David; Parham, Kourosh; Langman, Alan; Vambutas, Andrea; Ryan, Sheila E; Shin, Jennifer J
2016-07-01
There are multiple nationally representative databases that support epidemiologic and outcomes research, and it is unknown whether an otolaryngology-specific resource would prove indispensable or superfluous. Therefore, our objective was to determine the feasibility of analyses in the National Ambulatory Medical Care Survey (NAMCS) and National Hospital Ambulatory Medical Care Survey (NHAMCS) databases as compared with the otolaryngology-specific Creating Healthcare Excellence through Education and Research (CHEER) database. Parallel analyses in 2 data sets. Ambulatory visits in the United States. To test a fixed hypothesis that could be directly compared between data sets, we focused on a condition with expected prevalence high enough to substantiate availability in both. This query also encompassed a broad span of diagnoses to sample the breadth of available information. Specifically, we compared an assessment of suspected risk factors for sensorineural hearing loss in subjects 0 to 21 years of age, according to a predetermined protocol. We also assessed the feasibility of 6 additional diagnostic queries among all age groups. In the NAMCS/NHAMCS data set, the number of measured observations was not sufficient to support reliable numeric conclusions (percentage standard error among risk factors: 38.6-92.1). Analysis of the CHEER database demonstrated that age, sex, meningitis, and cytomegalovirus were statistically significant factors associated with pediatric sensorineural hearing loss (P < .01). Among the 6 additional diagnostic queries assessed, NAMCS/NHAMCS usage was also infeasible; the CHEER database contained 1585 to 212,521 more observations per annum. An otolaryngology-specific database has added utility when compared with already available national ambulatory databases. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.
Keshtiari, Niloofar; Kuhlmann, Michael; Eslami, Moharram; Klann-Delius, Gisela
2015-03-01
Research on emotional speech often requires valid stimuli for assessing perceived emotion through prosody and lexical content. To date, no comprehensive emotional speech database for Persian is officially available. The present article reports the process of designing, compiling, and evaluating a comprehensive emotional speech database for colloquial Persian. The database contains a set of 90 validated novel Persian sentences classified in five basic emotional categories (anger, disgust, fear, happiness, and sadness), as well as a neutral category. These sentences were validated in two experiments by a group of 1,126 native Persian speakers. The sentences were articulated by two native Persian speakers (one male, one female) in three conditions: (1) congruent (emotional lexical content articulated in a congruent emotional voice), (2) incongruent (neutral sentences articulated in an emotional voice), and (3) baseline (all emotional and neutral sentences articulated in neutral voice). The speech materials comprise about 470 sentences. The validity of the database was evaluated by a group of 34 native speakers in a perception test. Utterances recognized better than five times chance performance (71.4 %) were regarded as valid portrayals of the target emotions. Acoustic analysis of the valid emotional utterances revealed differences in pitch, intensity, and duration, attributes that may help listeners to correctly classify the intended emotion. The database is designed to be used as a reliable material source (for both text and speech) in future cross-cultural or cross-linguistic studies of emotional speech, and it is available for academic research purposes free of charge. To access the database, please contact the first author.
Sakai, Hiroaki; Lee, Sung Shin; Tanaka, Tsuyoshi; Numa, Hisataka; Kim, Jungsok; Kawahara, Yoshihiro; Wakimoto, Hironobu; Yang, Ching-chia; Iwamoto, Masao; Abe, Takashi; Yamada, Yuko; Muto, Akira; Inokuchi, Hachiro; Ikemura, Toshimichi; Matsumoto, Takashi; Sasaki, Takuji; Itoh, Takeshi
2013-02-01
The Rice Annotation Project Database (RAP-DB, http://rapdb.dna.affrc.go.jp/) has been providing a comprehensive set of gene annotations for the genome sequence of rice, Oryza sativa (japonica group) cv. Nipponbare. Since the first release in 2005, RAP-DB has been updated several times along with the genome assembly updates. Here, we present our newest RAP-DB based on the latest genome assembly, Os-Nipponbare-Reference-IRGSP-1.0 (IRGSP-1.0), which was released in 2011. We detected 37,869 loci by mapping transcript and protein sequences of 150 monocot species. To provide plant researchers with highly reliable and up-to-date rice gene annotations, we have been incorporating literature-based manually curated data, and 1,626 loci currently incorporate literature-based annotation data, including commonly used gene names or gene symbols. Transcriptional activities are shown at the nucleotide level by mapping RNA-Seq reads derived from 27 samples. We also mapped the Illumina reads of a leading Japanese japonica cultivar, Koshihikari, and a Chinese indica cultivar, Guangluai-4, to the genome and show alignments together with the single nucleotide polymorphisms (SNPs) and gene functional annotations through a newly developed browser, the Short-Read Assembly Browser (S-RAB). We have developed two satellite databases, the Plant Gene Family Database (PGFD) and the Integrative Database of Cereal Gene Phylogeny (IDCGP), which display gene family and homologous gene relationships among diverse plant species. RAP-DB and the satellite databases offer simple and user-friendly web interfaces, enabling plant and genome researchers to access the data easily and facilitating a broad range of plant research topics.
EHME: a new word database for research in Basque language.
Acha, Joana; Laka, Itziar; Landa, Josu; Salaburu, Pello
2014-11-14
This article presents EHME, the frequency dictionary of Basque structure: an online program that enables researchers in psycholinguistics to extract word and nonword stimuli, based on a broad range of statistics concerning the properties of Basque words. The database consists of 22.7 million tokens, and available properties include morphological structure frequency and word-similarity measures, in addition to classical indexes: word frequency, orthographic structure, orthographic similarity, bigram and biphone frequency, and syllable-based measures. Measures are indexed at the lemma, morpheme, and word levels. We include reliability and validation analyses. The application is freely available, and enables the user to extract words based on concrete statistical criteria, as well as to obtain statistical characteristics from a list of words.
[Health management system in outpatient follow-up of kidney transplantation patients].
Zhang, Hong; Xie, Jinliang; Yao, Hui; Liu, Ling; Tan, Jianwen; Geng, Chunmi
2014-07-01
To develop a health management system for outpatient follow-up of kidney transplant patients, Access 2010 database software was used to build the system on the Windows XP operating system. Database management and post-operative follow-up of kidney transplantation patients were realized through six function modules: data input, data query, data printing, questionnaire survey, data export, and follow-up management. The system ran stably and reliably, and data input was easy and fast; querying, counting, and printing were convenient. The health management system for patients after kidney transplantation not only reduces the workload of the follow-up staff, but also improves the efficiency of outpatient follow-up.
NASA Astrophysics Data System (ADS)
Löbling, L.
2017-03-01
Aluminum (Al) nucleosynthesis takes place during the asymptotic-giant-branch (AGB) phase of stellar evolution. Al abundance determinations in hot white dwarf stars provide constraints to understand this process. Precise abundance measurements require advanced non-local thermodynamic equilibrium stellar-atmosphere models and reliable atomic data. In the framework of the German Astrophysical Virtual Observatory (GAVO), the Tübingen Model-Atom Database (TMAD) contains ready-to-use model atoms for elements from hydrogen to barium. A revised, elaborated Al model atom has recently been added. We present preliminary stellar-atmosphere models and emergent Al line spectra for the hot white dwarfs G191-B2B and RE 0503-289.
Rimland, Joseph M; Abraha, Iosief; Luchetta, Maria Laura; Cozzolino, Francesco; Orso, Massimiliano; Cherubini, Antonio; Dell'Aquila, Giuseppina; Chiatti, Carlos; Ambrosio, Giuseppe; Montedori, Alessandro
2016-06-01
Healthcare databases are useful sources to investigate the epidemiology of chronic obstructive pulmonary disease (COPD), to assess longitudinal outcomes in patients with COPD, and to develop disease management strategies. However, in order to constitute a reliable source for research, healthcare databases need to be validated. The aim of this protocol is to perform the first systematic review of studies reporting the validation of codes related to COPD diagnoses in healthcare databases. MEDLINE, EMBASE, Web of Science and the Cochrane Library databases will be searched using appropriate search strategies. Studies that evaluated the validity of COPD codes (such as the International Classification of Diseases 9th Revision and 10th Revision system; the Read Codes system; or the International Classification of Primary Care) in healthcare databases will be included. Inclusion criteria will be: (1) the presence of a reference standard case definition for COPD; (2) the presence of at least one test measure (eg, sensitivity, positive predictive values, etc); and (3) the use of a healthcare database (including administrative claims databases, electronic healthcare databases or COPD registries) as a data source. Pairs of reviewers will independently abstract data using standardised forms and will assess quality using a checklist based on the Standards for Reporting of Diagnostic accuracy (STARD) criteria. This systematic review protocol has been produced in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocol (PRISMA-P) 2015 statement. Ethics approval is not required. Results of this study will be submitted to a peer-reviewed journal for publication. The results from this systematic review will be used for outcome research on COPD and will serve as a guide to identify appropriate case definitions of COPD, and reference standards, for researchers involved in validating healthcare databases. CRD42015029204.
Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
NASA Astrophysics Data System (ADS)
Parise, Mario; Vennari, Carmela
2015-04-01
Sinkholes are definitely the most typical geohazard affecting karst territories. Even though typically their formation is related to an underground cave, and the related subterranean drainage, sinkholes can also be observed on non-soluble deposits such as alluvial and/or colluvial materials. Further, the presence of cavities excavated by man (for different purposes, and in different ages) may be at the origin of other phenomena of sinkholes, the so-called anthropogenic sinkholes, that characterize many historical centres of built-up areas. In Italy, due to the long history of the country, these latter, too, are of great importance, being those that typically involve human buildings and infrastructures, and cause damage and losses to society. As for any other geohazard, building a database through collection of information on the past events is a mandatory step to start the analyses aimed at the evaluation of susceptibility, hazard, and risk. The Institute of Research for the Hydrological Protection (IRPI) of the National Research Council of Italy (CNR) has been working in the last years at the construction of a specific chronological database on sinkholes in the whole country. In the database, the natural and anthropogenic sinkholes are treated in two different subsets, given the strong differences existing as regards both the causal and triggering factors, and the stabilization works as well. A particular care was given in the database to the precise site and date of occurrence of the events, as crucial information for assessing, respectively, the susceptibility and the hazard related to the particular phenomenon under study. As a requirement to be included in the database, a temporal reference of the sinkhole occurrence must be therefore known. Certainty in the geographical position of the event is a fundamental information to correctly locate the sinkhole, and to develop geological and morphological considerations aimed at performing a susceptibility analysis. 
This factor must not be disregarded since, especially for the most ancient events, the source data may not be precise enough to position the sinkhole site correctly. Consequently, each sinkhole in the database was ranked according to its degree of locational certainty, subdivided into three levels. The accuracy of the date of occurrence was then evaluated, with the highest accuracy assigned when all of the required information (hour, day, month, and year of occurrence) was available. The temporal reference is of crucial importance in the IRPI database, since the final goal of the research project is the definition of sinkhole hazard in Italy. To reach that goal, given the definition of hazard, the time of occurrence and the most likely return time of the events have to be assessed. Overall, the aforementioned elements of the database make it possible to weigh the reliability and precision of the information presented, and to give the correct weight to the outcomes of its analyses. These issues are discussed in the present contribution as crucial elements that need to be clearly defined in a scientifically sound database. The database so far contains about 900 events (31% natural sinkholes and 48% anthropogenic sinkholes, while 21% are of uncertain origin). It is continuously updated, and represents a good starting point for analysis of sinkhole hazard at the national scale, aimed at raising the level of attention paid by scientists, practitioners and authorities to this subtle hazard.
NASA Astrophysics Data System (ADS)
Poulon, Fanny; Ibrahim, Ali; Zanello, Marc; Pallud, Johan; Varlet, Pascale; Malouki, Fatima; Abi Lahoud, Georges; Devaux, Bertrand; Abi Haidar, Darine
2017-02-01
Eliminating the time-consuming process of conventional biopsy is a practical improvement, as is increasing the accuracy of tissue diagnoses and patient comfort. We addressed these needs by developing a multimodal nonlinear endomicroscope that allows real-time optical biopsies during a surgical procedure. It will provide immediate information for diagnostic use without removal of tissue and will assist the choice of the optimal surgical strategy. The instrument will combine several means of contrast: nonlinear fluorescence, second harmonic generation signal, reflectance, fluorescence lifetime and spectral analysis. Multimodality is crucial for a reliable and comprehensive analysis of tissue. In parallel with the instrumental development, we are currently improving our understanding of the endogenous fluorescence signal with the different modalities that will be implemented in the instrument. This endeavor will allow us to create a database of the optical signatures of diseased and control brain tissues. This proceeding presents the preliminary results of this database for three types of tissue: cortex, metastasis and glioblastoma.
Direct hydrocarbon identification using AVO analysis in the Malay Basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lye, Y.C.; Yaacob, M.R.; Birkett, N.E.
1994-07-01
Esso Production Malaysia Inc. and Petronas Carigali Sdn. Bhd. have been conducting AVO (amplitude versus offset) processing and interpretation since April 1991 in an attempt to identify hydrocarbon fluids predrill. The major part of this effort was in the PM-5, PM-8, and PM-9 contract areas where an extensive exploration program is underway. To date, more than 1000 km of seismic data have been analyzed using the AVO technique, and the results were used to support the drilling of more than 50 exploration and delineation wells. Gather modeling of well data was used to calibrate and predict the presence of hydrocarbon in proposed well locations. In order to generate accurate gather models, a geophysical properties and AVO database was needed, and great effort was spent in producing an accurate and complete database. This database is continuously being updated so that an experience file can be built to further improve the reliability of the AVO prediction.
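The abstract does not give the equations behind the gather models. As an illustration of the physics AVO analysis exploits, the widely used two-term Shuey approximation of the P-wave reflection coefficient can be sketched as follows; the shale/gas-sand properties are invented for illustration and are not values from the Malay Basin study:

```python
import math

def shuey_two_term(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Two-term Shuey approximation of the P-wave reflection
    coefficient: R(theta) = A + B * sin^2(theta)."""
    # Averages and differences of elastic properties across the interface
    vp, dvp = (vp1 + vp2) / 2, vp2 - vp1
    vs, dvs = (vs1 + vs2) / 2, vs2 - vs1
    rho, drho = (rho1 + rho2) / 2, rho2 - rho1
    # A: normal-incidence reflectivity (intercept)
    a = 0.5 * (dvp / vp + drho / rho)
    # B: AVO gradient
    b = 0.5 * dvp / vp - 2 * (vs / vp) ** 2 * (drho / rho + 2 * dvs / vs)
    return a + b * math.sin(math.radians(theta_deg)) ** 2

# Illustrative shale over gas sand: the amplitude grows more negative
# with offset -- the kind of AVO anomaly used as a hydrocarbon indicator.
near = shuey_two_term(2900, 1330, 2.29, 2540, 1620, 2.09, 5)
far = shuey_two_term(2900, 1330, 2.29, 2540, 1620, 2.09, 35)
print(near, far)
```

Gather modeling in practice compares such synthetic offset responses, built from well-log properties, against the observed seismic gathers.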
Sivakumaran, Subathira; Huffman, Lee; Sivakumaran, Sivalingam
2018-01-01
A country-specific food composition database is useful for assessing nutrient intake reliably in national nutrition surveys, research studies and clinical practice. The New Zealand Food Composition Database (NZFCDB) programme seeks to maintain relevant and up-to-date food records that reflect the composition of foods commonly consumed in New Zealand, following Food and Agriculture Organization of the United Nations/International Network of Food Data Systems (FAO/INFOODS) guidelines. Food composition data (FCD) for up to 87 core components for approximately 600 foods have been added to NZFCDB since 2010. These foods include those identified as providing key nutrients in the 2008/09 New Zealand Adult Nutrition Survey. Nutrient data are obtained by analysis of composite samples or are calculated from analytical data. Currently >2500 foods in 22 food groups are freely available in various NZFCDB output products on the website: www.foodcomposition.co.nz. NZFCDB is the main source of FCD for estimating nutrient intake in New Zealand nutrition surveys. Copyright © 2016 Elsevier Ltd. All rights reserved.
Proposal for a unified selection to medical residency programs.
Toffoli, Sônia Ferreira Lopes; Ferreira Filho, Olavo Franco; Andrade, Dalton Francisco de
2013-01-01
This paper proposes the unification of entrance exams for medical residency programs (MRPs) in Brazil. Problems related to MRPs and their interface with public health problems in Brazil are highlighted, along with how this proposal could help solve them. The proposal is to create an item database to be used in unified MRP exams. Some advantages of using Item Response Theory (IRT) in this database are highlighted. MRP entrance exams are currently developed and applied in a decentralized way, with each school responsible for its own examination. The quality of these exams is questionable, and reviews of item quality, validity, and reliability of their application are not commonly disclosed. Evaluation is important in every education system, bringing about required changes and control of teaching and learning. Unifying MRP entrance exams, besides offering high-quality exams to participating institutions, could serve as an additional means of rating medical schools and driving improvements, provide studies with a database, and allow regional mobility. Copyright © 2013 Elsevier Editora Ltda. All rights reserved.
Scientific information repository assisting reflectance spectrometry in legal medicine.
Belenki, Liudmila; Sterzik, Vera; Bohnert, Michael; Zimmermann, Klaus; Liehr, Andreas W
2012-06-01
Reflectance spectrometry is a fast and reliable method for the characterization of human skin if the spectra are analyzed with respect to a physical model describing the optical properties of human skin. For a field study performed at the Institute of Legal Medicine and the Freiburg Materials Research Center of the University of Freiburg, a scientific information repository has been developed, which is a variant of an electronic laboratory notebook and assists in the acquisition, management, and high-throughput analysis of reflectance spectra in heterogeneous research environments. At the core of the repository is a database management system hosting the master data. It is filled with primary data via a graphical user interface (GUI) programmed in Java, which also enables the user to browse the database and access the results of data analysis. The latter is carried out via Matlab, Python, and C programs, which retrieve the primary data from the scientific information repository, perform the analysis, and store the results in the database for further usage.
Barbara, Angela M; Dobbins, Maureen; Brian Haynes, R; Iorio, Alfonso; Lavis, John N; Raina, Parminder; Levinson, Anthony J
2017-07-11
The objective of this work was to provide easy access to reliable health information based on good quality research that will help health care professionals to learn what works best for seniors to stay as healthy as possible, manage health conditions and build supportive health systems. This will help meet the demands of our aging population that clinicians provide high quality care for older adults, that public health professionals deliver disease prevention and health promotion strategies across the life span, and that policymakers address the economic and social need to create a robust health system and a healthy society for all ages. The McMaster Optimal Aging Portal's (Portal) professional bibliographic database contains high quality scientific evidence about optimal aging specifically targeted to clinicians, public health professionals and policymakers. The database content comes from three information services: McMaster Premium LiteratUre Service (MacPLUS™), Health Evidence™ and Health Systems Evidence. The Portal is continually updated, freely accessible online, easily searchable, and provides email-based alerts when new records are added. The database is being continually assessed for value, usability and use. A number of improvements are planned, including French language translation of content, increased linkages between related records within the Portal database, and inclusion of additional types of content. While this article focuses on the professional database, the Portal also houses resources for patients, caregivers and the general public, which may also be of interest to geriatric practitioners and researchers.
Geospatial database for heritage building conservation
NASA Astrophysics Data System (ADS)
Basir, W. N. F. W. A.; Setan, H.; Majid, Z.; Chong, A.
2014-02-01
Heritage buildings are icons from the past that survive into the present. Through heritage architecture, we can learn about the economic issues and social activities of the past. Nowadays, heritage buildings are under threat from natural disasters, uncertain weather, pollution and other factors. In order to preserve this heritage for future generations, recording and documenting of heritage buildings is required. With the development of information systems and data collection techniques, it is possible to create a 3D digital model. This 3D information plays an important role in recording and documenting heritage buildings. 3D modelling and virtual reality techniques have demonstrated the ability to visualize the real world in 3D, and can provide a better platform for communication and understanding of heritage buildings. Combining 3D modelling with Geographic Information System (GIS) technology will create a database that supports various analyses of spatial data in the form of a 3D model. The objectives of this research are to determine the reliability of the Terrestrial Laser Scanning (TLS) technique for data acquisition of heritage buildings, and to develop a geospatial database for heritage building conservation purposes. The result of data acquisition becomes a guideline for 3D model development. This 3D model is exported to GIS format in order to develop a database for heritage building conservation, in which the requirements of the conservation process are included. Through this research, a proper database for storing and documenting heritage building conservation data will be developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remec, Igor; Rosseel, Thomas M; Field, Kevin G
Life extensions of nuclear power plants to 60 and potentially 80 years of operation have renewed interest in long-term material degradation. One material being considered is concrete, with a particular focus on radiation-induced effects. Based on the projected neutron fluence (E > 0.1 MeV) values in the concrete biological shields of the US PWR fleet and the available data on radiation effects on concrete, some decrease in the mechanical properties of concrete cannot be ruled out during extended operation beyond 60 years. An expansion of the irradiated concrete database and a reliable determination of the relevant neutron fluence energy cutoff value are necessary to assure reliable risk assessment for the extended operation of NPPs.
NASA Technical Reports Server (NTRS)
VanHeel, Nancy; Pettit, Janet; Rice, Barbara; Smith, Scott M.
2003-01-01
Food and nutrient databases are populated with data obtained from a variety of sources including USDA Reference Tables, scientific journals, food manufacturers and foreign food tables. The food and nutrient database maintained by the Nutrition Coordinating Center (NCC) at the University of Minnesota is continually updated with current nutrient data and continues to be expanded with additional nutrient fields to meet diverse research endeavors. Data are strictly evaluated for reliability and relevance before incorporation into the database; however, the values are obtained from various sources and food samples rather than from direct chemical analysis of specific foods. Precise nutrient values for specific foods are essential to the nutrition program at the National Aeronautics and Space Administration (NASA). Specific foods to be included in the menus of astronauts are chemically analyzed at the Johnson Space Center for selected nutrients. A request from NASA for a method to enter the chemically analyzed nutrient values for these space flight food items into the Nutrition Data System for Research (NDS-R) software resulted in modification of the database and interview system for use by NASA, with further modification to extend the method for related uses by more typical research studies.
Cameron, M; Perry, J; Middleton, J R; Chaffer, M; Lewis, J; Keefe, G P
2018-01-01
This study evaluated MALDI-TOF mass spectrometry and a custom reference spectra expanded database for the identification of bovine-associated coagulase-negative staphylococci (CNS). A total of 861 CNS isolates were used in the study, covering 21 different CNS species. The majority of the isolates were previously identified by rpoB gene sequencing (n = 804) and the remainder were identified by sequencing of hsp60 (n = 56) and tuf (n = 1). The genotypic identification was considered the gold standard identification. Using a direct transfer protocol and the existing commercial database, MALDI-TOF mass spectrometry showed a typeability of 96.5% (831/861) and an accuracy of 99.2% (824/831). Using a custom reference spectra expanded database, which included an additional 13 in-house created reference spectra, isolates were identified by MALDI-TOF mass spectrometry with 99.2% (854/861) typeability and 99.4% (849/854) accuracy. Overall, MALDI-TOF mass spectrometry using the direct transfer method was shown to be a highly reliable tool for the identification of bovine-associated CNS. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
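The typeability and accuracy figures quoted above follow directly from the reported counts; a minimal sketch of the arithmetic:

```python
def typeability_and_accuracy(n_total, n_identified, n_correct):
    """Typeability: fraction of isolates yielding any identification.
    Accuracy: fraction of identifications matching the genotypic
    (gene-sequencing) gold standard."""
    return n_identified / n_total, n_correct / n_identified

# Counts reported in the abstract: existing commercial database
# vs. the custom reference-spectra expanded database.
commercial = typeability_and_accuracy(861, 831, 824)
expanded = typeability_and_accuracy(861, 854, 849)
print(commercial)  # -> approximately (0.965, 0.992)
print(expanded)    # -> approximately (0.992, 0.994)
```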
High-integrity databases for helicopter operations
NASA Astrophysics Data System (ADS)
Pschierer, Christian; Schiefele, Jens; Lüthy, Juerg
2009-05-01
Helicopter Emergency Medical Service missions (HEMS) impose a high workload on pilots due to short preparation time, operations in low level flight, and landings in unknown areas. The research project PILAS, a cooperation between Eurocopter, Diehl Avionics, DLR, EADS, Euro Telematik, ESG, Jeppesen, the Universities of Darmstadt and Munich, and funded by the German government, approached this problem by researching a pilot assistance system which supports the pilots during all phases of flight. The databases required for the specified helicopter missions include different types of topological and cultural data for graphical display on the SVS system, AMDB data for operations at airports and helipads, and navigation data for IFR segments. The most critical databases for the PILAS system, however, are highly accurate terrain and obstacle data. While RTCA DO-276 specifies high accuracies and integrities only for the areas around airports, HEMS helicopters typically operate outside of these controlled areas and thus require highly reliable terrain and obstacle data for their designated response areas. This data has been generated by a LIDAR scan of the specified test region. Obstacles have been extracted into a vector format. This paper includes a short overview of the complete PILAS system and then focuses on the generation of the required high-quality databases.
Protein structure database search and evolutionary classification.
Yang, Jinn-Moon; Tung, Chi-Hua
2006-01-01
As more protein structures become available and structural genomics efforts provide structural models in a genome-wide strategy, there is a growing need for fast and accurate methods for discovering homologous proteins and for evolutionary classification of newly determined structures. We have developed 3D-BLAST, in part, to address these issues. 3D-BLAST is as fast as BLAST and calculates the statistical significance (E-value) of an alignment to indicate the reliability of the prediction. Using this method, we first identified 23 states of the structural alphabet that represent pattern profiles of the backbone fragments and then used them to represent protein structure databases as structural alphabet sequence databases (SADB). Our method enhanced BLAST as a search method, using a new structural alphabet substitution matrix (SASM) to find the longest common substructures with high-scoring structured segment pairs from an SADB database. Using personal computers with Intel Pentium4 (2.8 GHz) processors, our method searched more than 10 000 protein structures in 1.3 s and achieved good agreement with search results from detailed structure alignment methods. [3D-BLAST is available at http://3d-blast.life.nctu.edu.tw]
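As a rough illustration of the structural-alphabet idea, encoding a 3D structure into a letter sequence amounts to assigning each backbone fragment to its nearest cluster profile. The feature space and centroids below are invented for illustration only; the actual 3D-BLAST alphabet is derived from backbone-fragment geometry, not from these toy values:

```python
import math
import string

# Hypothetical 23-letter structural alphabet: one letter per cluster
# centroid in some backbone-fragment feature space (toy 2-D features here).
ALPHABET = string.ascii_uppercase[:23]  # 'A'..'W'

def encode_structure(fragment_features, centroids):
    """Map each backbone fragment to the letter of its nearest centroid,
    turning a 3D structure into a 1D sequence searchable with
    BLAST-style sequence methods."""
    letters = []
    for feat in fragment_features:
        dists = [math.dist(feat, c) for c in centroids]
        letters.append(ALPHABET[dists.index(min(dists))])
    return "".join(letters)

# Toy example: 23 centroids spread along a line; three fragments
# fall nearest to centroids 0, 5, and 22.
centroids = [(float(i), 0.0) for i in range(23)]
seq = encode_structure([(0.2, 0.1), (4.9, -0.3), (22.4, 0.0)], centroids)
print(seq)  # -> "AFW"
```

Once every database structure is encoded this way, substructure search reduces to sequence alignment scored with a substitution matrix (the SASM in 3D-BLAST's case).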
Peptide Identification by Database Search of Mixture Tandem Mass Spectra*
Wang, Jian; Bourne, Philip E.; Bandeira, Nuno
2011-01-01
In high-throughput proteomics the development of computational methods and novel experimental strategies often rely on each other. In certain areas, mass spectrometry methods for data acquisition are ahead of computational methods to interpret the resulting tandem mass spectra. Particularly, although there are numerous situations in which a mixture tandem mass spectrum can contain fragment ions from two or more peptides, nearly all database search tools still make the assumption that each tandem mass spectrum comes from one peptide. Common examples include mixture spectra from co-eluting peptides in complex samples, spectra generated from data-independent acquisition methods, and spectra from peptides with complex post-translational modifications. We propose a new database search tool (MixDB) that is able to identify mixture tandem mass spectra from more than one peptide. We show that peptides can be reliably identified with up to 95% accuracy from mixture spectra while considering only 0.01% of all possible peptide pairs (a four-orders-of-magnitude speedup). Comparison with current database search methods indicates that our approach has better or comparable sensitivity and precision at identifying single-peptide spectra while simultaneously being able to identify 38% more peptides from mixture spectra at significantly higher precision. PMID:21862760
Toxicological database of soil and derived products (BDT).
Uricchio, Vito Felice
2008-01-01
The Toxicological database of soil and derived products is a project first proposed by the Regional Environmental Authority of Apulia. The project aims to provide comprehensive and updated information on the regional environmental characteristics, on the state of pollution of regional soil, on the main pollutants, and on the remediation techniques to be used for both non-point (agricultural activities) and point (industrial activities) sources of pollution. The project focuses on soil pollution because of the fundamental role played by soil in supporting the biological cycle. Furthermore, the project is motivated both by the reduction of human health risks due to the ingestion of toxic substances (present in some links of the food chain), and by the recognized importance of safeguarding groundwater quality (the primary source of fresh water in many Mediterranean regions). The essential requirements are speed and simplicity of data entry; reliability and stability of the database structures; and speed, ease, and flexibility of queries. Free consultation of the database represents one of the most remarkable advantages of using an "open" system.
Using wind plant data to increase reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, Valerie A.; Ogilvie, Alistair B.; McKenney, Bridget L.
2011-01-01
Operators interested in improving reliability should begin with a focus on the performance of the wind plant as a whole. To then understand the factors which drive individual turbine performance, which together comprise the plant performance, it is necessary to track a number of key indicators. Analysis of these key indicators can reveal the type, frequency, and cause of failures and will also identify their contributions to overall plant performance. The ideal approach to using data to drive good decisions includes first determining which critical decisions can be based on data. When those required decisions are understood, then the analysis required to inform those decisions can be identified, and finally the data to be collected in support of those analyses can be determined. Once equipped with high-quality data and analysis capabilities, the key steps to data-based decision making for reliability improvements are to isolate possible improvements, select the improvements with largest return on investment (ROI), implement the selected improvements, and finally to track their impact.
Dainer-Best, Justin; Lee, Hae Yeon; Shumake, Jason D; Yeager, David S; Beevers, Christopher G
2018-06-07
Although the self-referent encoding task (SRET) is commonly used to measure self-referent cognition in depression, many different SRET metrics can be obtained. The current study used best subsets regression with cross-validation and independent test samples to identify the SRET metrics most reliably associated with depression symptoms in three large samples: a college student sample (n = 572), a sample of adults from Amazon Mechanical Turk (n = 293), and an adolescent sample from a school field study (n = 408). Across all 3 samples, SRET metrics associated most strongly with depression severity included number of words endorsed as self-descriptive and rate of accumulation of information required to decide whether adjectives were self-descriptive (i.e., drift rate). These metrics had strong intratask and split-half reliability and high test-retest reliability across a 1-week period. Recall of SRET stimuli and traditional reaction time (RT) metrics were not robustly associated with depression severity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Using Solid State Drives as a Mid-Tier Cache in Enterprise Database OLTP Applications
NASA Astrophysics Data System (ADS)
Khessib, Badriddine M.; Vaid, Kushagra; Sankar, Sriram; Zhang, Chengliang
When originally introduced, flash-based solid state drives (SSDs) exhibited very high random read throughput with low sub-millisecond latencies. However, in addition to their steep prices, SSDs suffered from slow write rates and reliability concerns related to cell wear. For these reasons, they were relegated to niche status in the consumer and personal computer market. Since then, several architectural enhancements have been introduced that led to a substantial increase in random write performance as well as a reasonable improvement in reliability. From a purely performance point of view, these high I/O rates and improved reliability make SSDs an ideal choice for enterprise On-Line Transaction Processing (OLTP) applications. However, from a price/performance point of view, the case for SSDs may not be clear: enterprise-class SSD price per GB continues to be at least 10x higher than that of conventional magnetic hard disk drives (HDDs), despite a considerable drop in flash chip prices.
Reliability and agreement in student ratings of the class environment.
Nelson, Peter M; Christ, Theodore J
2016-09-01
The current study estimated the reliability and agreement of student ratings of the classroom environment obtained using the Responsive Environmental Assessment for Classroom Teaching (REACT; Christ, Nelson, & Demers, 2012; Nelson, Demers, & Christ, 2014). Coefficient alpha, class-level reliability, and class agreement indices were evaluated as each index provides important information for different interpretations and uses of student rating scale data. Data for 84 classes across 29 teachers in a suburban middle school were sampled to derive reliability and agreement indices for the REACT subscales across 4 class sizes: 25, 20, 15, and 10. All participating teachers were White and a larger number of 6th-grade classes were included (42%) relative to 7th- (33%) or 8th- (23%) grade classes. Teachers were responsible for a variety of content areas, including language arts (26%), science (26%), math (20%), social studies (19%), communications (6%), and Spanish (3%). Coefficient alpha estimates were generally high across all subscales and class sizes (α = .70-.95); class-mean estimates were greatly impacted by the number of students sampled from each class, with class-level reliability values generally falling below .70 when class size was reduced from 25 to 20. Further, within-class student agreement varied widely across the REACT subscales (mean agreement = .41-.80). Although coefficient alpha and test-retest reliability are commonly reported in research with student rating scales, class-level reliability and agreement are not. The observed differences across coefficient alpha, class-level reliability, and agreement indices provide evidence for evaluating students' ratings of the class environment according to their intended use (e.g., differentiating between classes, class-level instructional decisions). (PsycINFO Database Record (c) 2016 APA, all rights reserved).
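Coefficient alpha, the first of the three indices evaluated above, has a standard closed form. A minimal sketch of the computation on toy ratings (not the REACT data):

```python
def cronbach_alpha(items):
    """Coefficient alpha from item-score columns, where
    items[i][j] is respondent j's score on item i:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(var(col) for col in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / var(totals))

# Toy ratings: 3 items, 5 students. Highly consistent items
# yield an alpha near 1, in the range the study reports.
items = [[4, 3, 5, 2, 4], [4, 3, 4, 2, 5], [5, 3, 5, 2, 4]]
print(round(cronbach_alpha(items), 2))  # -> 0.94
```

Class-level reliability and within-class agreement, by contrast, depend on class size and on how ratings are aggregated, which is why the study reports them separately from alpha.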
Jensen, Christian Gaden; Niclasen, Janni; Vangkilde, Signe Allerup; Petersen, Anders; Hasselbalch, Steen Gregers
2016-05-01
The Mindful Attention Awareness Scale (MAAS) measures perceived degree of inattentiveness in different contexts and is often used as a reversed indicator of mindfulness. MAAS is hypothesized to reflect a psychological trait or disposition when used outside attentional training contexts, but the long-term test-retest reliability of MAAS scores is virtually untested. It is unknown whether MAAS predicts psychological health after controlling for standardized socioeconomic status classifications. First, MAAS translated to Danish was validated psychometrically within a randomly invited healthy adult community sample (N = 490). Factor analysis confirmed that MAAS scores quantified a unifactorial construct of excellent composite reliability and consistent convergent validity. Structural equation modeling revealed that MAAS scores contributed independently to predicting psychological distress and mental health, after controlling for age, gender, income, socioeconomic occupational class, stressful life events, and social desirability (β = 0.32-.42, ps < .001). Second, MAAS scores showed satisfactory short-term test-retest reliability in 100 retested healthy university students. Finally, MAAS sample mean scores as well as individuals' scores demonstrated satisfactory test-retest reliability across a 6 months interval in the adult community (retested N = 407), intraclass correlations ≥ .74. MAAS scores displayed significantly stronger long-term test-retest reliability than scores measuring psychological distress (z = 2.78, p = .005). Test-retest reliability estimates did not differ within demographic and socioeconomic strata. Scores on the Danish MAAS were psychometrically validated in healthy adults. MAAS's inattentiveness scores reflected a unidimensional construct, long-term reliable disposition, and a factor of independent significance for predicting psychological health. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
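The abstract does not state which test produced z = 2.78. One common way to compare two correlation coefficients is Fisher's r-to-z transformation, sketched below for independent samples with illustrative numbers (for two reliabilities measured on the same retested sample, a test for dependent correlations such as Steiger's would be more appropriate):

```python
import math

def fisher_z_compare(r1, n1, r2, n2):
    """z statistic for the difference between two independent
    correlation coefficients via Fisher's r-to-z transformation."""
    z1 = math.atanh(r1)  # 0.5 * ln((1 + r) / (1 - r))
    z2 = math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

# Illustrative: a test-retest correlation of .74 (the reported
# intraclass-correlation floor, n = 407) against a hypothetical
# weaker correlation of .60 in an equally sized sample.
z = fisher_z_compare(0.74, 407, 0.60, 407)
print(round(z, 2))
```

With these made-up inputs the difference is clearly significant; the paper's z = 2.78 (p = .005) reflects its own comparison of MAAS against distress-score reliability.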
Stovall, Bradley A; Kumar, Shrawan
2010-11-01
The objective of this review is to establish the current state of knowledge on the reliability of clinical assessment of asymmetry in the lumbar spine and pelvis. To search the literature, the authors consulted the databases of MEDLINE, CINAHL, AMED, MANTIS, Academic Search Complete, and Web of Knowledge using different combinations of the following keywords: palpation, asymmetry, inter or intraexaminer reliability, tissue texture, assessment, and anatomic landmark. Of the 23 studies identified, 14 did not meet the inclusion criteria and were excluded. The quality and methods of studies investigating the reliability of bony anatomic landmark asymmetry assessment are variable. The κ statistic ranges without training for interexaminer reliability were as follows: anterior superior iliac spine (ASIS), -0.01 to 0.19; posterior superior iliac spine (PSIS), 0.04 to 0.15; inferior lateral angle, transverse plane (ILA-A/P), -0.03 to 0.11; inferior lateral angles, coronal plane (ILA-S/I), -0.01 to 0.08; sacral sulcus (SS), -0.4 to 0.37; lumbar spine transverse processes L1 through L5, 0.04 to 0.17. The corresponding ranges for intraexaminer reliability were higher for all associated landmarks: ASIS, 0.19 to 0.4; PSIS, 0.13 to 0.49; ILA-A/P, 0.1 to 0.2; ILA-S/I, 0.03 to 0.21; SS, 0.24 to 0.28; lumbar spine transverse processes L1 through L5, not applicable. Further research is needed to better understand the reliability of asymmetry assessment methods in manipulative medicine.
Stovall, Bradley A.; Kumar, Shrawan
2011-01-01
The objective of this review is to establish the current state of knowledge on the reliability of clinical assessment of asymmetry in the lumbar spine and pelvis. To search the literature, the authors consulted the databases of MEDLINE, CINAHL, AMED, MANTIS, Academic Search Complete, and Web of Knowledge using different combinations of the following keywords: palpation, asymmetry, inter- or intraexaminer reliability, tissue texture, assessment, and anatomic landmark. Of the 23 studies identified, 14 did not meet the inclusion criteria and were excluded. The quality and methods of studies investigating the reliability of bony anatomic landmark asymmetry assessment are variable. The κ statistic ranges without training for interexaminer reliability were as follows: anterior superior iliac spine (ASIS), −0.01 to 0.19; posterior superior iliac spine (PSIS), 0.04 to 0.15; inferior lateral angle, transverse plane (ILA-A/P), −0.03 to 0.11; inferior lateral angles, coronal plane (ILA-S/I), −0.01 to 0.08; sacral sulcus (SS), −0.4 to 0.37; lumbar spine transverse processes L1 through L5, 0.04 to 0.17. The corresponding ranges for intraexaminer reliability were higher for all associated landmarks: ASIS, 0.19 to 0.4; PSIS, 0.13 to 0.49; ILA-A/P, 0.1 to 0.2; ILA-S/I, 0.03 to 0.21; SS, 0.24 to 0.28; lumbar spine transverse processes L1 through L5, not applicable. Further research is needed to better understand the reliability of asymmetry assessment methods in manipulative medicine. PMID:21135198
Helmerhorst, Hendrik J F; Brage, Søren; Warren, Janet; Besson, Herve; Ekelund, Ulf
2012-08-31
Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA) and in particular by physical activity questionnaires (PAQs) remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs and to quantitatively compare the performance between existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs using an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs, and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62-0.71 for existing, and 0.74-0.76 for new PAQs. Median validity coefficients ranged from 0.30-0.39 for existing, and from 0.25-0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, the validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and the error structure of the instrument.
Validity and Reliability of the Upper Extremity Work Demands Scale.
Jacobs, Nora W; Berduszek, Redmar J; Dijkstra, Pieter U; van der Sluis, Corry K
2017-12-01
Purpose To evaluate validity and reliability of the upper extremity work demands (UEWD) scale. Methods Participants from different levels of physical work demands, based on the Dictionary of Occupational Titles categories, were included. A historical database of 74 workers was added for factor analysis. Criterion validity was evaluated by comparing observed and self-reported UEWD scores. To assess structural validity, a factor analysis was executed. For reliability, the difference between two self-reported UEWD scores, the smallest detectable change (SDC), test-retest reliability and internal consistency were determined. Results Fifty-four participants were observed at work and 51 of them filled in the UEWD twice with a mean interval of 16.6 days (SD 3.3, range = 10-25 days). Criterion validity of the UEWD scale was moderate (r = .44, p = .001). Factor analysis revealed that 'force and posture' and 'repetition' subscales could be distinguished, with Cronbach's alpha of .79 and .84, respectively. Reliability was good; there was no significant difference between repeated measurements. An SDC of 5.0 was found. Test-retest reliability was good (intraclass correlation coefficient for agreement = .84) and all item-total correlations were >.30. There were two pairs of highly related items. Conclusion Reliability of the UEWD scale was good, but criterion validity was moderate. Based on current results, a modified UEWD scale (2 items removed, 1 item reworded, divided into 2 subscales) was proposed. Since observation appeared to be an inappropriate gold standard, we advise investigating other types of validity, such as construct validity, in further research.
Jennifer C. Jenkins; Richard A. Birdsey
2000-01-01
As interest grows in the role of forest growth in the carbon cycle, and as simulation models are applied to predict future forest productivity at large spatial scales, the need for reliable and field-based data for evaluation of model estimates is clear. We created estimates of potential forest biomass and annual aboveground production for the Chesapeake Bay watershed...
Creep and creep-recovery of a thermoplastic resin and composite
NASA Technical Reports Server (NTRS)
Hiel, Clem
1988-01-01
The database on advanced thermoplastic composites currently available to industry contains little data on creep and viscoelastic behavior. This behavior is nevertheless considered important, particularly for extended-service reliability in structural applications. The creep deformation of a specific thermoplastic resin and composite is reviewed. The problem of relating the data obtained on the resin to the data obtained on the composite is discussed.
2016-03-01
Performance Metrics University of Waterloo Permanganate Treatment of an Emplaced DNAPL Source (Thomson et al., 2007) Table 5.6 Remediation Performance Data... permanganate vs. peroxide/Fenton’s for chemical oxidation). Poorer performance was generally observed when the Total CVOC was the contaminant metric...using a soluble carbon substrate (lactate), chemical oxidation using Fenton’s reagent, and chemical oxidation using potassium permanganate . At
O'Brien, Daniel Tumminelli; Montgomery, Barrett W
2015-03-01
Much research has focused on physical disorder in urban neighborhoods as evidence that the community does not maintain local norms and spaces. Little attention has been paid to the opposite: indicators of proactive investment in the neighborhood's upkeep. This manuscript presents a methodology that translates a database of approved building permits into an ecometric of investment by community members, establishing basic content, criteria for reliability, and construct validity. A database from Boston, MA contained 150,493 permits spanning 2.5 years, each record including the property to be modified, permit type, and date issued. Investment was operationalized as the proportion of properties in a census block group that underwent an addition or renovation, excluding larger developments involving the demolition or construction of a building. The reliability analysis found that robust measures could be generated every 6 months, and that longitudinal analysis could differentiate between trajectories across neighborhoods. The validity analysis supported two hypotheses: investment was best predicted by homeownership and median income; and maintained an independent relationship with measures of physical disorder despite controlling for demographics, implying that it captures the other end of a spectrum of neighborhood maintenance. Possible uses for the measure in research and policy are discussed.
Measuring Physical Activity in Pregnancy Using Questionnaires: A Meta-Analysis
Schuster, Snježana; Šklempe Kokić, Iva; Sindik, Joško
2016-09-01
Physical activity (PA) during normal pregnancy has various positive effects on pregnant women’s health. Determination of the relationship between PA and health outcomes requires accurate measurement of PA in pregnant women. The purpose of this review is to provide a summary of valid and reliable PA questionnaires for pregnant women. During 2013, Pubmed, OvidSP and Web of Science databases were searched for trials on measurement properties of PA questionnaires for pregnant population. Six studies and four questionnaires met the inclusion criteria: Pregnancy Physical Activity Questionnaire, Modified Kaiser Physical Activity Survey, Short Pregnancy Leisure Time Physical Activity Questionnaire and Third Pregnancy Infection and Nutrition Study Physical Activity Questionnaire. Assessment of validity and reliability was performed using correlations of the scores in these questionnaires with objective measures and subjective measures (self-report) of PA, as well as test-retest reliability coefficients. Sample sizes included in analysis varied from 45 to 177 subjects. The best validity and reliability characteristics (together with effect sizes) were identified for the Modified Kaiser Physical Activity Survey and Pregnancy Physical Activity Questionnaire (French, Vietnamese, standard). In conclusion, assessment of PA during pregnancy remains a challenging and complex task. Questionnaires are a simple and effective, yet limited tool for assessing PA.
Gearbox Reliability Collaborative Phase 3 Gearbox 2 Test Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Link, H.; Keller, J.; Guo, Y.
2013-04-01
Gearboxes in wind turbines have not been achieving their expected design life even though they commonly meet or exceed the design criteria specified in current design standards. One of the basic premises of the National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) is that the low gearbox reliability results from the absence of critical elements in the design process or insufficient design tools. Key goals of the GRC are to improve design approaches and analysis tools and to recommend practices and test methods resulting in improved design standards for wind turbine gearboxes that lower the cost of energy (COE) through improved reliability. The GRC uses a combined gearbox testing, modeling and analysis approach, along with a database of information from gearbox failures collected from overhauls and investigation of gearbox condition monitoring techniques to improve wind turbine operations and maintenance practices. This test plan covers testing of Gearbox 2 (GB2) using the two-speed turbine controller that has been used in prior testing. The test series will investigate non-torque loads, high-speed shaft misalignment, and reproduction of field conditions in the dynamometer, and will also include vibration testing using an eddy-current brake on the gearbox's high-speed shaft.
Enhanced Component Performance Study: Motor-Driven Pumps 1998–2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2016-02-01
This report presents an enhanced performance evaluation of motor-driven pumps at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The motor-driven pump failure modes considered for standby systems are failure to start, failure to run less than or equal to one hour, and failure to run more than one hour; for normally running systems, the failure modes considered are failure to start and failure to run. An eight-hour unreliability estimate is also calculated and trended. The component reliability estimates and the reliability data are trended for the most recent 10-year period while yearly estimates for reliability are provided for the entire active period. Statistically significant increasing trends were identified in pump run hours per reactor year. Statistically significant decreasing trends were identified for standby systems industry-wide frequency of start demands, and run hours per reactor year for runs of less than or equal to one hour.
Reliability of Computerized Neurocognitive Tests for Concussion Assessment: A Meta-Analysis.
Farnsworth, James L; Dargo, Lucas; Ragan, Brian G; Kang, Minsoo
2017-09-01
Although widely used, computerized neurocognitive tests (CNTs) have been criticized because of low reliability and poor sensitivity. A systematic review was published summarizing the reliability of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) scores; however, this was limited to a single CNT. Expansion of the previous review to include additional CNTs and a meta-analysis is needed. Therefore, our purpose was to analyze reliability data for CNTs using meta-analysis and examine moderating factors that may influence reliability. A systematic literature search (key terms: reliability, computerized neurocognitive test, concussion) of electronic databases (MEDLINE, PubMed, Google Scholar, and SPORTDiscus) was conducted to identify relevant studies. Studies were included if they met all of the following criteria: used a test-retest design, involved at least 1 CNT, provided sufficient statistical data to allow for effect-size calculation, and were published in English. Two independent reviewers investigated each article to assess inclusion criteria. Eighteen studies involving 2674 participants were retained. Intraclass correlation coefficients were extracted to calculate effect sizes and determine overall reliability. The Fisher Z transformation adjusted for sampling error associated with averaging correlations. Moderator analyses were conducted to evaluate the effects of the length of the test-retest interval, intraclass correlation coefficient model selection, participant demographics, and study design on reliability. Heterogeneity was evaluated using the Cochran Q statistic. The proportion of acceptable outcomes was greatest for the Axon Sports CogState Test (75%) and lowest for the ImPACT (25%). Moderator analyses indicated that the type of intraclass correlation coefficient model used significantly influenced effect-size estimates, accounting for 17% of the variation in reliability. 
The Axon Sports CogState Test, which has a higher proportion of acceptable outcomes and shorter test duration relative to other CNTs, may be a reliable option; however, future studies are needed to compare the diagnostic accuracy of these instruments.
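The Fisher Z transformation used in the meta-analysis above stabilizes the sampling distribution of correlation coefficients before they are averaged across studies. A minimal sketch of pooling reliability coefficients this way (the coefficients and sample sizes below are illustrative, not the study's data):

```python
import math

def pooled_correlation(rs, ns=None):
    """Pool correlation coefficients via Fisher's Z transformation:
    z_i = atanh(r_i); average the z values (optionally weighted by
    n_i - 3, the inverse sampling variance of z); back-transform with tanh."""
    zs = [math.atanh(r) for r in rs]
    if ns is None:
        z_bar = sum(zs) / len(zs)
    else:
        weights = [n - 3 for n in ns]  # classic inverse-variance weights
        z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return math.tanh(z_bar)
```

Because atanh is convex on (0, 1), the pooled value exceeds the naive arithmetic mean of the coefficients, which is one reason meta-analyses adjust for this sampling error rather than averaging r directly.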
Décary, Simon; Ouellet, Philippe; Vendittoli, Pascal-André; Desmeules, François
2016-12-01
Clinicians often rely on physical examination tests to guide them in the diagnostic process of knee disorders. However, reliability of these tests is often overlooked and may influence the consistency of results and overall diagnostic validity. Therefore, the objective of this study was to systematically review evidence on the reliability of physical examination tests for the diagnosis of knee disorders. A structured literature search was conducted in databases up to January 2016. Included studies needed to report reliability measures of at least one physical test for any knee disorder. Methodological quality was evaluated using the QAREL checklist. A qualitative synthesis of the evidence was performed. Thirty-three studies were included with a mean QAREL score of 5.5 ± 0.5. Based on low to moderate quality evidence, the Thessaly test for meniscal injuries reached moderate inter-rater reliability (k = 0.54). Based on moderate to excellent quality evidence, the Lachman for anterior cruciate ligament injuries reached moderate to excellent inter-rater reliability (k = 0.42 to 0.81). Based on low to moderate quality evidence, the Tibiofemoral Crepitus, Joint Line and Patellofemoral Pain/Tenderness, Bony Enlargement and Joint Pain on Movement tests for knee osteoarthritis reached fair to excellent inter-rater reliability (k = 0.29 to 0.93). Based on low to moderate quality evidence, the Lateral Glide, Lateral Tilt, Lateral Pull and Quality of Movement tests for patellofemoral pain reached moderate to good inter-rater reliability (k = 0.49 to 0.73). Many physical tests appear to reach good inter-rater reliability, but this is based on low-quality and conflicting evidence. High-quality research is required to evaluate the reliability of knee physical examination tests. Copyright © 2016 Elsevier Ltd. All rights reserved.
Rubio-Ochoa, J; Benítez-Martínez, J; Lluch, E; Santacruz-Zaragozá, S; Gómez-Contreras, P; Cook, C E
2016-02-01
It has been suggested that differential diagnosis of headaches should consist of a robust subjective examination and a detailed physical examination of the cervical spine. Cervicogenic headache (CGH) is a form of headache that involves referred pain from the neck. To our knowledge, no studies have summarized the reliability and diagnostic accuracy of physical examination tests for CGH. The aim of this study was to summarize the reliability and diagnostic accuracy of physical examination tests used to diagnose CGH. A systematic review following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines was performed in four electronic databases (MEDLINE, Web of Science, Embase and Scopus). Full text reports concerning physical tests for the diagnosis of CGH which reported the clinometric properties for assessment of CGH, were included and screened for methodological quality. Quality Appraisal for Reliability Studies (QAREL) and Quality Assessment of Studies of Diagnostic Accuracy (QUADAS-2) scores were completed to assess article quality. Eight articles were retrieved for quality assessment and data extraction. Studies investigating diagnostic reliability of physical examination tests for CGH scored poorer on methodological quality (higher risk of bias) than those of diagnostic accuracy. There is sufficient evidence showing high levels of reliability and diagnostic accuracy of the selected physical examination tests for the diagnosis of CGH. The cervical flexion-rotation test (CFRT) exhibited both the highest reliability and the strongest diagnostic accuracy for the diagnosis of CGH. Copyright © 2015 Elsevier Ltd. All rights reserved.
Lagarde, Marloes L J; Kamalski, Digna M A; van den Engel-Hoek, Lenie
2016-02-01
To systematically review the available evidence for the reliability and validity of cervical auscultation in diagnosing several aspects of dysphagia in adults and children. Medline (PubMed), Embase and the Cochrane Library databases. The systematic review was carried out applying the steps of the PRISMA statement. The methodological quality of the included studies was evaluated using the Dutch 'Cochrane checklist for diagnostic accuracy studies'. A total of 90 articles were identified through the search strategy, and after applying the inclusion and exclusion criteria, six articles were included in this review. In the six studies, 197 patients were assessed with cervical auscultation. Two of the six articles were considered to be of 'good' quality and three studies were of 'moderate' quality. One article was excluded because of 'poor' methodological quality. Sensitivity ranges from 23%-94% and specificity ranges from 50%-74%. Inter-rater reliability was 'poor' or 'fair' in all studies. The intra-rater reliability shows wide variance among speech language therapists. In this systematic review, conflicting evidence is found for the validity of cervical auscultation. The reliability of cervical auscultation is insufficient when used as a stand-alone tool in the diagnosis of dysphagia in adults. There is no available evidence for the validity and reliability of cervical auscultation in children. Cervical auscultation should not be used as a stand-alone instrument to diagnose dysphagia. © The Author(s) 2015.
Omari, Taher I.; Savilampi, Johanna; Kokkinn, Karmen; Schar, Mistyka; Lamvik, Kristin; Doeltgen, Sebastian; Cock, Charles
2016-01-01
Purpose. We evaluated the intra- and interrater agreement and test-retest reliability of analyst derivation of swallow function variables based on repeated high resolution manometry with impedance measurements. Methods. Five subjects swallowed 10 × 10 mL saline on two occasions one week apart producing a database of 100 swallows. Swallows were repeat-analysed by six observers using software. Swallow variables were indicative of contractility, intrabolus pressure, and flow timing. Results. The average intraclass correlation coefficients (ICC) for intra- and interrater comparisons of all variable means showed substantial to excellent agreement (intrarater ICC 0.85–1.00; mean interrater ICC 0.77–1.00). Test-retest results were less reliable. ICC for test-retest comparisons ranged from slight to excellent depending on the class of variable. Contractility variables differed most in terms of test-retest reliability. Amongst contractility variables, UES basal pressure showed excellent test-retest agreement (mean ICC 0.94), measures of UES postrelaxation contractile pressure showed moderate to substantial test-retest agreement (mean Interrater ICC 0.47–0.67), and test-retest agreement of pharyngeal contractile pressure ranged from slight to substantial (mean Interrater ICC 0.15–0.61). Conclusions. Test-retest reliability of HRIM measures depends on the class of variable. Measures of bolus distension pressure and flow timing appear to be more test-retest reliable than measures of contractility. PMID:27190520
Validity and Reliability of Accelerometers in Patients With COPD: A SYSTEMATIC REVIEW.
Gore, Shweta; Blackwood, Jennifer; Guyette, Mary; Alsalaheen, Bara
2018-05-01
Reduced physical activity is associated with poor prognosis in chronic obstructive pulmonary disease (COPD). Accelerometers have greatly improved quantification of physical activity by providing information on step counts, body positions, energy expenditure, and magnitude of force. The purpose of this systematic review was to compare the validity and reliability of accelerometers used in patients with COPD. An electronic database search of MEDLINE and CINAHL was performed. Study quality was assessed with the Strengthening the Reporting of Observational Studies in Epidemiology checklist while methodological quality was assessed using the modified Quality Appraisal Tool for Reliability Studies. The search yielded 5392 studies; 25 met inclusion criteria. The SenseWear Pro armband reported high criterion validity under controlled conditions (r = 0.75-0.93) and high reliability (ICC = 0.84-0.86) for step counts. The DynaPort MiniMod demonstrated highest concurrent validity for step count using both video and manual methods. Validity of the SenseWear Pro armband varied between studies especially in free-living conditions, slower walking speeds, and with addition of weights during gait. A high degree of variability was found in the outcomes used and statistical analyses performed between studies, indicating a need for further studies to measure reliability and validity of accelerometers in COPD. The SenseWear Pro armband is the most commonly used accelerometer in COPD, but measurement properties are limited by gait speed variability and assistive device use. DynaPort MiniMod and Stepwatch accelerometers demonstrated high validity in patients with COPD but lack reliability data.
Network-based drug discovery by integrating systems biology and computational technologies
Leung, Elaine L.; Cao, Zhi-Wei; Jiang, Zhi-Hong; Zhou, Hua
2013-01-01
Network-based intervention has become a trend in treating systemic diseases, but it relies on regimen optimization and valid multi-target actions of the drugs. The complex multi-component nature of medicinal herbs may serve as a valuable resource for network-based multi-target drug discovery due to its potential treatment effects by synergy. Recently, multiple systems biology platforms have proven powerful in uncovering molecular mechanisms and connections between drugs and the dynamic networks they target. However, optimization methods for drug combinations remain insufficient, owing to the lack of tighter integration across multiple '-omics' databases. Newly developed algorithm- or network-based computational models can tightly integrate '-omics' databases and optimize combinational regimens of drug development, encouraging the development of medicinal herbs into a new wave of network-based multi-target drugs. However, challenges to further integration of medicinal herb databases with multiple systems biology platforms for multi-target drug optimization remain, owing to the uncertain reliability of individual data sets and the limited width, depth, and degree of standardization of herbal medicine. Standardization of the methodology and terminology of multiple systems biology and herbal databases would facilitate this integration. Enhancing publicly accessible databases and increasing the number of studies using systems biology platforms on herbal medicine would also be helpful. Further integration across various '-omics' platforms and computational tools would accelerate the development of network-based drug discovery and network medicine. PMID:22877768
2014-01-01
Protein biomarkers offer major benefits for diagnosis and monitoring of disease processes. Recent advances in protein mass spectrometry make it feasible to use this very sensitive technology to detect and quantify proteins in blood. To explore the potential of blood biomarkers, we conducted a thorough review to evaluate the reliability of data in the literature and to determine the spectrum of proteins reported to exist in blood, with the goal of creating a Federated Database of Blood Proteins (FDBP). A unique feature of our approach is the use of a SQL database for all of the peptide data; the power of the SQL database combined with standard informatic algorithms such as BLAST and the statistical analysis system (SAS) allowed the rapid annotation and analysis of the database without the need to create special programs to manage the data. Our mathematical analysis and review show that, in addition to the usual secreted proteins found in blood, there are many reports of intracellular proteins, with good agreement on transcription factors and DNA remodelling factors, as well as cellular receptors and their signal transduction enzymes. Overall, we have catalogued about 12,130 proteins identified by at least one unique peptide, and of these 3858 have 3 or more peptide correlations. The FDBP with annotations should facilitate testing blood for specific disease biomarkers. PMID:24476026
Kamitsuji, Shigeo; Matsuda, Takashi; Nishimura, Koichi; Endo, Seiko; Wada, Chisa; Watanabe, Kenji; Hasegawa, Koichi; Hishigaki, Haretsugu; Masuda, Masatoshi; Kuwahara, Yusuke; Tsuritani, Katsuki; Sugiura, Kenkichi; Kubota, Tomoko; Miyoshi, Shinji; Okada, Kinya; Nakazono, Kazuyuki; Sugaya, Yuki; Yang, Woosung; Sawamoto, Taiji; Uchida, Wataru; Shinagawa, Akira; Fujiwara, Tsutomu; Yamada, Hisaharu; Suematsu, Koji; Tsutsui, Naohisa; Kamatani, Naoyuki; Liou, Shyh-Yuh
2015-06-01
Japan Pharmacogenomics Data Science Consortium (JPDSC) has assembled a database for conducting pharmacogenomics (PGx) studies in Japanese subjects. The database contains the genotypes of 2.5 million single-nucleotide polymorphisms (SNPs) and 5 human leukocyte antigen loci from 2994 Japanese healthy volunteers, as well as 121 kinds of clinical information, including self-reports, physiological data, hematological data and biochemical data. In this article, the reliability of our data was evaluated by principal component analysis (PCA) and association analysis for hematological and biochemical traits by using genome-wide SNP data. PCA of the SNPs showed that all the samples were collected from the Japanese population and that the samples were separated into two major clusters by birthplace, Okinawa and other than Okinawa, as had been previously reported. Among 87 SNPs that have been reported to be associated with 18 hematological and biochemical traits in genome-wide association studies (GWAS), the associations of 56 SNPs were replicated using our database. Statistical power simulations showed that the sample size of the JPDSC control database is large enough to detect genetic markers having a relatively strong association even when the case sample size is small. The JPDSC database will be useful as control data for conducting PGx studies to explore genetic markers to improve the safety and efficacy of drugs either during clinical development or in post-marketing.
Rajagopalan, Malolan S.; Khanna, Vineet K.; Leiter, Yaacov; Stott, Meghan; Showalter, Timothy N.; Dicker, Adam P.; Lawrence, Yaacov R.
2011-01-01
Purpose: A wiki is a collaborative Web site, such as Wikipedia, that can be freely edited. Because of a wiki's lack of formal editorial control, we hypothesized that the content would be less complete and accurate than that of a professional peer-reviewed Web site. In this study, the coverage, accuracy, and readability of cancer information on Wikipedia were compared with those of the patient-orientated National Cancer Institute's Physician Data Query (PDQ) comprehensive cancer database. Methods: For each of 10 cancer types, medically trained personnel scored PDQ and Wikipedia articles for accuracy and presentation of controversies by using an appraisal form. Reliability was assessed by using interobserver variability and test-retest reproducibility. Readability was calculated from word and sentence length. Results: Evaluators were able to rapidly assess articles (18 minutes/article), with a test-retest reliability of 0.71 and interobserver variability of 0.53. For both Web sites, inaccuracies were rare, less than 2% of information examined. PDQ was significantly more readable than Wikipedia: Flesch-Kincaid grade level 9.6 versus 14.1. There was no difference in depth of coverage between PDQ and Wikipedia (29.9, 34.2, respectively; maximum possible score 72). Controversial aspects of cancer care were relatively poorly discussed in both resources (2.9 and 6.1 for PDQ and Wikipedia, respectively, NS; maximum possible score 18). A planned subanalysis comparing common and uncommon cancers demonstrated no difference. Conclusion: Although the wiki resource had similar accuracy and depth as the professionally edited database, it was significantly less readable. Further research is required to assess how this influences patients' understanding and retention. PMID:22211130
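The readability comparison above (Flesch-Kincaid grade 9.6 versus 14.1) is "calculated from word and sentence length"; the standard Flesch-Kincaid grade-level formula combines average sentence length with average syllables per word. A minimal sketch from raw counts (the counts in the example are hypothetical, and in practice syllable counting itself requires a heuristic or dictionary):

```python
def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level from raw text counts:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59.
    Higher values mean the text requires more years of schooling."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
```

For instance, a passage of 100 words in 5 sentences with 150 syllables scores a grade level of about 9.9, close to the PDQ score reported above; longer sentences and more polysyllabic words push the score toward the 14.1 reported for Wikipedia.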
Clement, Fiona; Zimmer, Scott; Dixon, Elijah; Ball, Chad G.; Heitman, Steven J.; Swain, Mark; Ghosh, Subrata
2016-01-01
Importance At the turn of the 21st century, studies evaluating the change in incidence of appendicitis over time have reported inconsistent findings. Objectives We compared the differences in the incidence of appendicitis derived from a pathology registry versus an administrative database in order to validate coding in administrative databases and establish temporal trends in the incidence of appendicitis. Design We conducted a population-based comparative cohort study to identify all individuals with appendicitis from 2000 to 2008. Setting & Participants Two population-based data sources were used to identify cases of appendicitis: 1) a pathology registry (n = 8,822); and 2) a hospital discharge abstract database (n = 10,453). Intervention & Main Outcome The administrative database was compared to the pathology registry for the following a priori analyses: 1) to calculate the positive predictive value (PPV) of administrative codes; 2) to compare the annual incidence of appendicitis; and 3) to assess differences in temporal trends. Temporal trends were assessed using a generalized linear model that assumed a Poisson distribution and reported as an annual percent change (APC) with 95% confidence intervals (CI). Analyses were stratified by perforated and non-perforated appendicitis. Results The administrative database (PPV = 83.0%) overestimated the incidence of appendicitis (100.3 per 100,000) when compared to the pathology registry (84.2 per 100,000). Codes for perforated appendicitis were not reliable (PPV = 52.4%) leading to overestimation in the incidence of perforated appendicitis in the administrative database (34.8 per 100,000) as compared to the pathology registry (19.4 per 100,000). The incidence of appendicitis significantly increased over time in both the administrative database (APC = 2.1%; 95% CI: 1.3, 2.8) and pathology registry (APC = 4.1; 95% CI: 3.1, 5.0).
Conclusion & Relevance The administrative database overestimated the incidence of appendicitis, particularly among perforated appendicitis. Therefore, studies utilizing administrative data to analyze perforated appendicitis should be interpreted cautiously. PMID:27820826
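The two validation statistics in the abstract above are simple to state: PPV is the fraction of administratively coded cases confirmed by the registry, and the APC comes from the year coefficient of a log-linear (Poisson) trend model. A minimal sketch (the counts and coefficient below are illustrative, chosen only to reproduce the reported magnitudes):

```python
import math

def positive_predictive_value(true_pos, false_pos):
    """PPV: fraction of code-identified cases that the gold standard
    (here, the pathology registry) confirms as true cases."""
    return true_pos / (true_pos + false_pos)

def annual_percent_change(beta):
    """APC from the year coefficient beta of a log-linear trend model,
    incidence ~ exp(beta * year):  APC = (exp(beta) - 1) * 100."""
    return (math.exp(beta) - 1.0) * 100.0
```

A year coefficient of beta = ln(1.021) corresponds to the 2.1% APC reported for the administrative database, i.e. incidence multiplying by 1.021 each year.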
Block, Annette; Debode, Frédéric; Grohmann, Lutz; Hulin, Julie; Taverniers, Isabel; Kluga, Linda; Barbau-Piednoir, Elodie; Broeders, Sylvia; Huber, Ingrid; Van den Bulcke, Marc; Heinze, Petra; Berben, Gilbert; Busch, Ulrich; Roosens, Nancy; Janssen, Eric; Žel, Jana; Gruden, Kristina; Morisset, Dany
2013-08-22
Since their first commercialization, the diversity of taxa and the genetic composition of transgene sequences in genetically modified plants (GMOs) are constantly increasing. To date, the detection of GMOs and derived products is commonly performed by PCR-based methods targeting specific DNA sequences introduced into the host genome. Information available regarding the GMOs' molecular characterization is dispersed and not appropriately organized. For this reason, GMO testing is very challenging and requires more complex screening strategies and decision making schemes, demanding in return the use of efficient bioinformatics tools relying on reliable information. The GMOseek matrix was built as a comprehensive, online open-access tabulated database which provides a reliable, comprehensive and user-friendly overview of 328 GMO events and 247 different genetic elements (status: 18/07/2013). The GMOseek matrix aims to facilitate detection of GMOs of plant origin at different phases of the analysis. It assists in selecting the targets for a screening analysis, interpreting the screening results, checking the occurrence of a screening element in a group of selected GMOs, identifying gaps in the available pool of GMO detection methods, and designing a decision tree. The GMOseek matrix is an independent database with effective functionalities in a format facilitating transferability to other platforms. Data were collected from all available sources and experimentally tested where detection methods and certified reference materials (CRMs) were available. The GMOseek matrix is currently a unique and very valuable tool with reliable information on GMOs of plant origin and their present genetic elements that enables further development of appropriate strategies for GMO detection. It is flexible enough to be further updated with new information and integrated in different applications and platforms.
A meta-data based method for DNA microarray imputation.
Jörnsten, Rebecka; Ouyang, Ming; Wang, Hui-Yu
2007-03-29
DNA microarray experiments are conducted in logical sets, such as time course profiling after a treatment is applied to the samples, or comparisons of the samples under two or more conditions. Due to cost and design constraints of spotted cDNA microarray experiments, each logical set commonly includes only a small number of replicates per condition. Despite the vast improvement of the microarray technology in recent years, missing values are prevalent. Intuitively, imputation of missing values is best done using many replicates within the same logical set. In practice, there are few replicates and thus reliable imputation within logical sets is difficult. However, it is in the case of few replicates that the presence of missing values, and how they are imputed, can have the most profound impact on the outcome of downstream analyses (e.g. significance analysis and clustering). This study explores the feasibility of imputation across logical sets, using the vast amount of publicly available microarray data to improve imputation reliability in the small sample size setting. We download all cDNA microarray data of Saccharomyces cerevisiae, Arabidopsis thaliana, and Caenorhabditis elegans from the Stanford Microarray Database. Through cross-validation and simulation, we find that, for all three species, our proposed imputation using data from public databases is far superior to imputation within a logical set, sometimes to an astonishing degree. Furthermore, the imputation root mean square error for significant genes is generally much lower than that of non-significant ones. Since downstream analysis of significant genes, such as clustering and network analysis, can be very sensitive to small perturbations of estimated gene effects, it is highly recommended that researchers apply reliable data imputation prior to further analysis. Our method can also be applied to cDNA microarray experiments from other species, provided good reference data are available.
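The idea of borrowing information from a large reference compendium can be illustrated with a generic k-nearest-neighbours imputation: find the reference profiles most similar to the incomplete profile on its observed positions, and average their values at the missing positions. This is a sketch of the general approach, not the paper's exact meta-data method, and the reference matrix below is invented:

```python
import numpy as np

def knn_impute(target, reference, k=2):
    """Impute NaNs in `target` (a 1-D expression profile) by averaging the k
    reference rows closest to it on the observed positions. A generic KNN
    sketch of cross-dataset imputation."""
    target = np.asarray(target, float)
    reference = np.asarray(reference, float)
    obs = ~np.isnan(target)
    # Root-mean-square distance over the observed positions only.
    d = np.sqrt(((reference[:, obs] - target[obs]) ** 2).mean(axis=1))
    nearest = reference[np.argsort(d)[:k]]
    filled = target.copy()
    filled[~obs] = nearest[:, ~obs].mean(axis=0)
    return filled

# Hypothetical reference data: two profiles similar to the target, one not.
ref = np.array([[1.0, 2.0, 3.0],
                [1.1, 2.1, 3.1],
                [9.0, 9.0, 9.0]])
print(knn_impute([1.0, np.nan, 3.0], ref, k=2))  # middle value ≈ 2.05
```

With a large public reference matrix, the same call imputes across logical sets rather than within one.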
Waugh, Shirley Moore; Bergquist-Beringer, Sandra
2016-06-01
In this descriptive multi-site study, we examined inter-rater agreement on 11 National Database of Nursing Quality Indicators® (NDNQI®) pressure ulcer (PrU) risk and prevention measures. One hundred twenty raters at 36 hospitals captured data from 1,637 patient records. At each hospital, agreement between the most experienced rater and each other team rater was calculated for each measure. In the ratings studied, 528 patients were rated as "at risk" for PrU and, therefore, were included in calculations of agreement for the prevention measures. Prevalence-adjusted kappa (PAK) was used to interpret inter-rater agreement because prevalence of single responses was high. The PAK values for eight measures indicated "substantial" to "near perfect" agreement between the most experienced and other team raters: Skin assessment on admission (.977, 95% CI [.966-.989]), PrU risk assessment on admission (.978, 95% CI [.964-.993]), Time since last risk assessment (.790, 95% CI [.729-.852]), Risk assessment method (.997, 95% CI [.991-1.0]), Risk status (.877, 95% CI [.838-.917]), Any prevention (.856, 95% CI [.76-.943]), Skin assessment (.956, 95% CI [.904-1.0]), and Pressure-redistribution surface use (.839, 95% CI [.763-.916]). For three intervention measures, PAK values fell below the recommended value of ≥.610: Routine repositioning (.577, 95% CI [.494-.661]), Nutritional support (.500, 95% CI [.418-.581]), and Moisture management (.556, 95% CI [.469-.643]). Areas of disagreement were identified. Findings provide support for the reliability of 8 of the 11 measures. Further clarification of data collection procedures is needed to improve reliability for the less reliable measures. © 2016 Wiley Periodicals, Inc.
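For two raters and binary ratings, prevalence-adjusted (and bias-adjusted) kappa is commonly computed as PABAK = 2 × observed agreement − 1. A minimal sketch, assuming simple paired binary ratings rather than the study's actual data layout:

```python
def pabak(pairs):
    """Prevalence- and bias-adjusted kappa for two raters giving binary
    ratings: PABAK = 2 * observed_agreement - 1. Unlike Cohen's kappa, it
    is not depressed when one response category dominates.
    pairs: iterable of (rater1, rater2) ratings."""
    pairs = list(pairs)
    agree = sum(a == b for a, b in pairs) / len(pairs)
    return 2 * agree - 1

# Hypothetical ratings with high prevalence of "1" responses:
ratings = [(1, 1)] * 90 + [(0, 0)] * 5 + [(1, 0)] * 3 + [(0, 1)] * 2
print(round(pabak(ratings), 3))  # 95% observed agreement → PABAK 0.9
```

Under the same data, plain Cohen's kappa would come out far lower, which is why a prevalence adjustment was needed when "prevalence of single responses was high".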
An evaluation of general practice websites in the UK.
Howitt, Alistair; Clement, Sarah; de Lusignan, Simon; Thiru, Krish; Goodwin, Daryl; Wells, Sally
2002-10-01
General practice websites are an emerging phenomenon, but there have been few critical evaluations of their content. Previously developed rating instruments to assess medical websites have been criticized for failing to report their reliability and validity. The purpose of this study was to develop a rating instrument for assessing UK general practice websites, and then to evaluate them critically. The STaRNet Website Assessment Tool (SWAT) was developed listing criteria that general practice websites may meet, which was then used to evaluate a random sample of websites drawn from an electronic database. A second assessor rated a subsample of the sites to assess the tool's inter-rater reliability. The setting was an information technology group of a general practice research network using a random sample of 108 websites identified from the database. The main outcome measures were identification of rating criteria and frequency counts from the website rating instrument. Ninety (93.3%) sites were accessible, of which 84 were UK general practice websites. Criteria most frequently met were those describing the scope of the website and their functionality. Apart from e-mail to practices, criteria related to electronic communication were rarely met. Criteria relating to the quality of information were least often met. Inter-rater reliability kappa values for the items in the tool ranged from -0.06 to 1.0 (mean 0.59). Values were >0.6 for 15 out of 25 criteria assessed in 40 sites which were rated by two assessors. General practice websites offer a wide range of information. They are technically satisfactory, but do not exploit fully the potential for electronic doctor-patient communication. The quality of information they provide is poor. The instrument may be developed as a template for general practices producing or revising their own websites.
Varni, James W; Limbers, Christine A; Burwinkle, Tasha M
2007-01-03
Health-related quality of life (HRQOL) measurement has emerged as an important health outcome in clinical trials, clinical practice improvement strategies, and healthcare services research and evaluation. While pediatric patient self-report should be considered the standard for measuring perceived HRQOL, there are circumstances when children are too young, too cognitively impaired, too ill or fatigued to complete a HRQOL instrument, and reliable and valid parent proxy-report instruments are needed in such cases. Further, it is typically parents' perceptions of their children's HRQOL that influences healthcare utilization. Data from the PedsQL DatabaseSM were utilized to test the reliability and validity of parent proxy-report at the individual age subgroup level for ages 2-16 years as recommended by recent FDA guidelines. The sample analyzed represents parent proxy-report age data on 13,878 children ages 2 to 16 years from the PedsQL 4.0 Generic Core Scales DatabaseSM. Parents were recruited from general pediatric clinics, subspecialty clinics, and hospitals in which their children were being seen for well-child checks, mild acute illness, or chronic illness care (n = 3,718, 26.8%), and from a State Children's Health Insurance Program (SCHIP) in California (n = 10,160, 73.2%). The percentage of missing item responses for the parent proxy-report sample as a whole was 2.1%, supporting feasibility. The majority of the parent proxy-report scales across the age subgroups exceeded the minimum internal consistency reliability standard of 0.70 required for group comparisons, while the Total Scale Scores across the age subgroups approached or exceeded the reliability criterion of 0.90 recommended for analyzing individual patient scale scores. Construct validity was demonstrated utilizing the known groups approach. 
For each PedsQL scale and summary score, across age subgroups, healthy children demonstrated a statistically significant difference in HRQOL (better HRQOL) than children with a known chronic health condition, with most effect sizes in the medium to large range. The results demonstrate the feasibility, reliability, and validity of parent proxy-report at the individual age subgroup level for ages 2-16 years. These analyses are consistent with recent FDA guidelines which require instrument development and validation testing for children and adolescents within fairly narrow age groupings and which determine the lower age limit at which reliable and valid responses across age categories are achievable. Even as pediatric patient self-report is advocated, there remains a fundamental role for parent proxy-report in pediatric clinical trials and health services research.
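The 0.70 (group comparison) and 0.90 (individual scores) internal-consistency criteria cited above refer to Cronbach's alpha, which can be computed directly from an item-score matrix. The scores below are invented for illustration:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha internal-consistency reliability for an
    (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    X = np.asarray(scores, float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of respondents' totals
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical item scores for five respondents on a three-item scale.
scores = [[3, 3, 3], [2, 2, 3], [4, 4, 4], [1, 1, 2], [3, 2, 3]]
print(round(cronbach_alpha(scores), 2))  # ≈0.95, above the 0.90 criterion
```

Alpha rises when items covary strongly relative to their individual variances, which is why longer Total Scale Scores tend to clear the stricter 0.90 bar.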
Developing New Rainfall Estimates to Identify the Likelihood of Agricultural Drought in Mesoamerica
NASA Astrophysics Data System (ADS)
Pedreros, D. H.; Funk, C. C.; Husak, G. J.; Michaelsen, J.; Peterson, P.; Lasndsfeld, M.; Rowland, J.; Aguilar, L.; Rodriguez, M.
2012-12-01
The population in Central America was estimated at ~40 million people in 2009, with 65% in rural areas directly relying on local agricultural production for subsistence, and additional urban populations relying on regional production. Mapping rainfall patterns and values in Central America is a complex task due to the rough topography and the influence of two oceans on either side of this narrow land mass. Characterization of precipitation amounts both in time and space is of great importance for monitoring agricultural food production for food security analysis. With the goal of developing reliable rainfall fields, the Famine Early Warning Systems Network (FEWS NET) has compiled a dense set of historical rainfall stations for Central America through cooperation with meteorological services and global databases. The station database covers the years 1900-present with the highest density between 1970-2011. Interpolating station data by themselves does not provide a reliable result because it ignores the topographical influences that dominate the region. To account for this, climatological rainfall fields were used to support the interpolation of the station data using a modified Inverse Distance Weighting process. By blending the station data with the climatological fields, a historical rainfall database was compiled for 1970-2011 at a 5 km resolution for every five-day interval. This new database opens the door to analyses such as the impact of sea surface temperature on rainfall patterns, changes to the typical dry spell during the rainy season, and characterization of drought frequency and rainfall trends, among others. This study uses the historical database to identify the frequency of agricultural drought in the region and explores possible changes in precipitation patterns during the past 40 years. A threshold of 500 mm of rainfall during the growing season was used to define agricultural drought for maize.
This threshold was selected based on assessments of crop conditions from previous seasons, and was identified as an amount roughly corresponding to significant crop loss for maize, a major crop in most of the region. Results identify areas in central Honduras and Nicaragua as well as the Altiplano region in Guatemala that experienced 15 seasons of agricultural drought for the period May-July during the years 1970-2000. Preliminary results show no clear trend in rainfall, but further investigation is needed to confirm that agricultural drought is not becoming more frequent in this region.
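The climatology-supported interpolation described above can be sketched as a ratio-based inverse distance weighting: interpolate the station/climatology ratios, then multiply them back onto the climatological grid. This is a generic illustration under assumed station and grid values, not FEWS NET's actual algorithm:

```python
import numpy as np

def idw_blend(stations, grid_clim, power=2):
    """Blend station rainfall with a climatological grid by inverse-distance
    weighting of station/climatology ratios. A generic sketch of the idea.
    stations: list of (x, y, observed_mm, climatology_mm) at grid coordinates."""
    ny, nx = grid_clim.shape
    out = np.empty_like(grid_clim)
    for j in range(ny):
        for i in range(nx):
            w_sum = r_sum = 0.0
            for (sx, sy, obs, clim) in stations:
                d2 = (sx - i) ** 2 + (sy - j) ** 2
                if d2 == 0:                      # cell contains a station
                    r_sum, w_sum = obs / clim, 1.0
                    break
                w = 1.0 / d2 ** (power / 2)
                w_sum += w
                r_sum += w * (obs / clim)
            out[j, i] = grid_clim[j, i] * (r_sum / w_sum)
    return out

# Two hypothetical stations on a uniform 500 mm climatology grid.
grid = idw_blend([(0, 0, 400.0, 500.0), (2, 2, 600.0, 500.0)],
                 np.full((3, 3), 500.0))
print(grid)
```

A seasonal-total grid produced this way could then be thresholded (e.g. `grid < 500`) to flag cells meeting the 500 mm agricultural-drought criterion for maize.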
Varni, James W; Limbers, Christine A; Burwinkle, Tasha M
2007-01-03
The last decade has evidenced a dramatic increase in the development and utilization of pediatric health-related quality of life (HRQOL) measures in an effort to improve pediatric patient health and well-being and determine the value of healthcare services. The emerging paradigm shift toward patient-reported outcomes (PROs) in clinical trials has provided the opportunity to further emphasize the value and essential need for pediatric patient self-reported outcomes measurement. Data from the PedsQL DatabaseSM were utilized to test the hypothesis that children as young as 5 years of age can reliably and validly report their HRQOL. The sample analyzed represented child self-report age data on 8,591 children ages 5 to 16 years from the PedsQL 4.0 Generic Core Scales DatabaseSM. Participants were recruited from general pediatric clinics, subspecialty clinics, and hospitals in which children were being seen for well-child checks, mild acute illness, or chronic illness care (n = 2,603, 30.3%), and from a State Children's Health Insurance Program (SCHIP) in California (n = 5,988, 69.7%). Items on the PedsQL 4.0 Generic Core Scales had minimal missing responses for children as young as 5 years old, supporting feasibility. The majority of the child self-report scales across the age subgroups, including for children as young as 5 years, exceeded the minimum internal consistency reliability standard of 0.70 required for group comparisons, while the Total Scale Scores across the age subgroups approached or exceeded the reliability criterion of 0.90 recommended for analyzing individual patient scale scores. Construct validity was demonstrated utilizing the known groups approach. 
For each PedsQL scale and summary score, across age subgroups, including children as young as 5 years, healthy children demonstrated a statistically significant difference in HRQOL (better HRQOL) than children with a known chronic health condition, with most effect sizes in the medium to large range. The results demonstrate that children as young as 5 years can reliably and validly self-report their HRQOL when given the opportunity to do so with an age-appropriate instrument. These analyses are consistent with recent FDA guidelines which require instrument development and validation testing for children and adolescents within fairly narrow age groupings and which determine the lower age limit at which children can provide reliable and valid responses across age categories.
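The "medium to large" effect sizes reported in these PedsQL analyses conventionally refer to Cohen's d (≈0.2 small, ≈0.5 medium, ≈0.8 large), computed with a pooled standard deviation. The scores below are invented for illustration:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled standard
    deviation. Conventional benchmarks: ~0.2 small, ~0.5 medium, ~0.8 large."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Hypothetical HRQOL scale scores (0-100) for two small groups.
healthy = [80, 85, 90, 75, 95]
chronic = [60, 70, 65, 75, 55]
print(round(cohens_d(healthy, chronic), 2))  # 2.53: a large effect
```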
Ellison, GTH; Richter, LM; de Wet, T; Harris, HE; Griesel, RD; McIntyre, JA
2007-01-01
This study examined the reliability of hand-written and computerised records of birth data collected during the Birth to Ten study at Baragwanath Hospital in Soweto. The reliability of record-keeping in hand-written obstetric and neonatal files was assessed by comparing duplicate records of six different variables abstracted from six different sections in these files. The reliability of computerised record-keeping was assessed by comparing the original hand-written record of each variable with records contained in the hospital's computerised database. These data sets displayed similar levels of reliability, which suggests that similar errors occurred when data were transcribed from one section of the files to the next, and from these files to the computerised database. In both sets of records reliability was highest for the categorical variable infant sex, and for those continuous variables (such as maternal age and gravidity) recorded with unambiguous units. Reliability was lower for continuous variables that could be recorded with different levels of precision (such as birth weight), those that were occasionally measured more than once, and those that could be measured using more than one measurement technique (such as gestational age). Reducing the number of times records are transcribed, categorising continuous variables, and standardising the techniques used for measuring and recording variables would improve the reliability of both hand-written and computerised data sets. PMID:9287552
NASA Astrophysics Data System (ADS)
Chen, Shuo-Bin; Liu, Guo-Cai; Gu, Lian-Quan; Huang, Zhi-Shu; Tan, Jia-Heng
2018-02-01
Design of small molecules targeted at human telomeric G-quadruplex DNA is an extremely active research area. Interestingly, the telomeric G-quadruplex is a highly polymorphic structure. Changes in its conformation upon small molecule binding may be a powerful method to achieve a desired biological effect. However, the rational development of small molecules capable of regulating conformational change of telomeric G-quadruplex structures is still challenging. In this study, we developed a reliable ligand-based pharmacophore model based on isaindigotone derivatives with conformational change activity toward telomeric G-quadruplex DNA. Furthermore, virtual screening of a database was conducted using this pharmacophore model, and benzopyranopyrimidine derivatives in the database were identified as strong inducers of conformational change in telomeric G-quadruplex DNA, transforming it from a hybrid-type structure to a parallel structure.
McCutchan, E. A.; Brown, D. A.; Sonzogni, A. A.
2017-03-30
Databases of evaluated nuclear data form a cornerstone on which we build academic nuclear structure physics, reaction physics, astrophysics, and many applied nuclear technologies. In basic research, nuclear data are essential for selecting, designing and conducting experiments, and for the development and testing of theoretical models to understand the fundamental properties of atomic nuclei. Likewise, the applied fields of nuclear power, homeland security, stockpile stewardship and nuclear medicine all have deep roots requiring evaluated nuclear data. Each of these fields requires rapid and easy access to up-to-date, comprehensive and reliable databases. The DOE-funded US Nuclear Data Program is a specific and coordinated effort tasked to compile, evaluate and disseminate nuclear structure and reaction data such that it can be used by the world-wide nuclear physics community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holdmann, Gwen
2016-12-20
Alaska is considered a world leader in renewable energy and microgrid technologies. Our workplan started as an analysis of existing wind-diesel systems, many of which were not performing as designed. We aimed to analyze and understand the performance of existing wind-diesel systems, to establish a knowledge baseline from which to work towards improvement and maximizing renewable energy utilization. To accomplish this, we worked with the Alaska Energy Authority to develop a comprehensive database of wind system experience, including underlying climatic and socioeconomic characteristics, actual operating data, projected vs. actual capital and O&M costs, and a catalogue of catastrophic anomalies. This database formed the foundation for the rest of the research program, with the overarching goal of delivering low-cost, reliable, and sustainable energy to diesel microgrids.
Preclinical tests of an android based dietary logging application.
Kósa, István; Vassányi, István; Pintér, Balázs; Nemes, Márta; Kámánné, Krisztina; Kohut, László
2014-01-01
The paper describes the first, preclinical evaluation of a dietary logging application developed at the University of Pannonia, Hungary. The mobile user interface is briefly introduced. The three evaluation phases examined the completeness and contents of the dietary database and the time expenditure of the mobile-based diet logging procedure. The results show that although there are substantial individual differences between various dietary databases, the expected difference in nutrient content is below 10% for a typical institutional menu list. Another important finding is that the time needed to record meals can be reduced to about 3 minutes daily, especially if the user uses set-based search. A well-designed user interface on a mobile device is a viable and reliable basis for a personalized lifestyle support service.
The sunspot databases of the Debrecen Observatory
NASA Astrophysics Data System (ADS)
Baranyi, Tünde; Gyori, Lajos; Ludmány, András
2015-08-01
We present the sunspot databases and online tools available at the Debrecen Heliophysical Observatory: the DPD (Debrecen Photoheliographic Data, 1974-), the SDD (SOHO/MDI-Debrecen Data, 1996-2010), the HMIDD (SDO/HMI-Debrecen Data, 2010-), and the revised version of the Greenwich Photoheliographic Data (GPR, 1874-1976) presented together with the Hungarian Historical Solar Drawings (HHSD, 1872-1919). These are the most detailed and reliable documentations of sunspot activity in the relevant time intervals. They are very useful for studying sunspot group evolution on various time scales from hours to weeks. Time-dependent differences between the available long-term sunspot databases are investigated and cross-calibration factors are determined between them. This work has received funding from the European Community's Seventh Framework Programme (FP7/2012-2015) under grant agreement No. 284461 (eHEROES).
A plan for the North American Bat Monitoring Program (NABat)
Loeb, Susan C.; Rodhouse, Thomas J.; Ellison, Laura E.; Lausen, Cori L.; Reichard, Jonathan D.; Irvine, Kathryn M.; Ingersoll, Thomas E.; Coleman, Jeremy; Thogmartin, Wayne E.; Sauer, John R.; Francis, Charles M.; Bayless, Mylea L.; Stanley, Thomas R.; Johnson, Douglas H.
2015-01-01
The purpose of the North American Bat Monitoring Program (NABat) is to create a continent-wide program to monitor bats at local to rangewide scales that will provide reliable data to promote effective conservation decisionmaking and the long-term viability of bat populations across the continent. This is an international, multiagency program. Four approaches will be used to gather monitoring data to assess changes in bat distributions and abundances: winter hibernaculum counts, maternity colony counts, mobile acoustic surveys along road transects, and acoustic surveys at stationary points. These monitoring approaches are described along with methods for identifying species recorded by acoustic detectors. Other chapters describe the sampling design, the database management system (Bat Population Database), and statistical approaches that can be used to analyze data collected through this program.
The Raid distributed database system
NASA Technical Reports Server (NTRS)
Bhargava, Bharat; Riedl, John
1989-01-01
Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.
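Raid's atomic distributed commitment can be illustrated with a minimal two-phase-commit sketch: the coordinator collects prepare votes, then broadcasts a unanimous decision, so every site commits or every site aborts. The in-process classes below stand in for Raid's actual message-passing servers and are purely illustrative:

```python
# Minimal two-phase-commit (2PC) sketch of atomic distributed commitment.
# Real systems like Raid exchange messages between server processes; this
# in-process simplification shows only the protocol's control flow.
class Participant:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "init"

    def prepare(self):
        """Phase 1: vote yes/no and move to a prepared or aborted state."""
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def finish(self, commit):
        """Phase 2: apply the coordinator's global decision."""
        self.state = "committed" if commit else "aborted"

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]  # phase 1: collect votes
    decision = all(votes)                        # unanimous yes → commit
    for p in participants:                       # phase 2: broadcast decision
        p.finish(decision)
    return decision

sites = [Participant("A"), Participant("B"), Participant("C", can_commit=False)]
print(two_phase_commit(sites))  # one "no" vote aborts the whole transaction
```

The unanimity rule is what makes commitment atomic: no subset of sites can end up committed while others abort.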
Mohseni Bandpei, Mohammad A; Rahmani, Nahid; Majdoleslam, Basir; Abdollahi, Iraj; Ali, Shabnam Shah; Ahmad, Ashfaq
2014-09-01
The purpose of this study was to review the literature to determine whether surface electromyography (EMG) is a reliable tool to assess paraspinal muscle fatigue in healthy subjects and in patients with low back pain (LBP). A literature search for the period of 2000 to 2012 was performed, using PubMed, ProQuest, Science Direct, EMBASE, OVID, CINAHL, and MEDLINE databases. Electromyography, reliability, median frequency, paraspinal muscle, endurance, low back pain, and muscle fatigue were used as keywords. The literature search yielded 178 studies using the above keywords. Twelve articles were selected according to the inclusion criteria of the study. In 7 of the 12 studies, surface EMG was applied only in healthy subjects, and in 5 studies, the reliability of surface EMG was investigated in patients with LBP or in comparison with a control group. In all of these studies, median frequency was shown to be a reliable EMG parameter to assess paraspinal muscle fatigue. There was a wide variation among studies in terms of methodology, surface EMG parameters, electrode location, procedure, and homogeneity of the study population. The results suggest that there is a convincing body of evidence to support the merit of surface EMG in the assessment of paraspinal muscle fatigue in healthy subjects and in patients with LBP. Copyright © 2014 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
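The median frequency fatigue index discussed in this review is the frequency that divides the EMG power spectrum into two halves of equal power; it shifts downward as a muscle fatigues. A minimal FFT-based sketch (the pure 60 Hz test signal is synthetic, for illustration only):

```python
import numpy as np

def median_frequency(signal, fs):
    """Median frequency of a signal's power spectrum: the frequency below
    which half of the total spectral power lies. A common surface-EMG
    fatigue index (it decreases with muscle fatigue)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    cumulative = np.cumsum(power)
    # First frequency bin at which cumulative power reaches half the total.
    return freqs[np.searchsorted(cumulative, cumulative[-1] / 2)]

fs = 1000                            # 1 kHz sampling, typical for surface EMG
t = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * 60 * t)     # synthetic 60 Hz tone
print(median_frequency(sig, fs))     # 60.0
```

In practice the slope of median frequency over successive windows of a sustained contraction, rather than a single value, is used as the fatigue measure.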
de Albuquerque, Priscila Maria Nascimento Martins; de Alencar, Geisa Guimarães; de Oliveira, Daniela Araújo; de Siqueira, Gisela Rocha
2018-01-01
The aim of this study was to examine and interpret the concordance, accuracy, and reliability of photogrammetric protocols available in the literature for evaluating cervical lordosis in an adult population aged 18 to 59 years. A systematic search of 6 electronic databases (MEDLINE via PubMed, LILACS, CINAHL, Scopus, ScienceDirect, and Web of Science) located studies that assessed the reliability and/or concordance and/or accuracy of photogrammetric protocols for evaluating cervical lordosis, compared with radiography. Articles published through April 2016 were selected. Two independent reviewers used critical appraisal tools (QUADAS and QAREL) to assess the quality of the selected studies. Two studies were included in the review and had high levels of reliability (intraclass correlation coefficient: 0.974-0.98). Only 1 study assessed the concordance between the methods, which was calculated using Pearson's correlation coefficient. To date, the accuracy of photogrammetry in diagnosing hyperlordosis of the cervical spine has not been thoroughly investigated; we found no study in the literature addressing it. However, both current studies report high levels of intra- and interrater reliability. To increase the level of evidence of photogrammetry in the evaluation of cervical lordosis, it is necessary to conduct further studies using a larger sample to increase the external validity of the findings. Copyright © 2018. Published by Elsevier Inc.
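Intraclass correlation coefficients like those quoted above are computed from two-way ANOVA mean squares; one common choice is Shrout and Fleiss's ICC(2,1) (two-way random effects, absolute agreement, single rater). The included studies' exact ICC model is not stated here, and the ratings below are invented:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater
    (Shrout & Fleiss). ratings: (n_subjects, k_raters) matrix."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    grand = Y.mean()
    msr = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    msc = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
    resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))              # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical cervical-angle ratings: 4 subjects, 2 raters.
print(round(icc2_1([[9, 10], [8, 8], [7, 8], [10, 9]]), 2))  # → 0.69
```

Perfectly agreeing raters give ICC = 1; systematic between-rater offsets lower ICC(2,1) because it penalizes absolute disagreement, not just inconsistency.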
A study of automotive workers anthropometric physical characteristics from Mexico Northwest.
Lucero-Duarte, Karla; de la Vega-Bustillos, Enrique; López-Millán, Francisco
2012-01-01
Due to the lack of anthropometric information in northwest Mexico, we conducted an anthropometric study that represents the population's physical characteristics and is reliable for the design or redesign of workstations. The study was divided into two phases. The first was an anthropometric study of 2,900 automotive industry workers in northwest Mexico, covering 40 body dimensions of 2,345 males and 555 females, organized for use in future research. The second phase compared the anthropometric characteristics of populations reported in four Mexican studies and a Colombian study against the current study. The benefits of this project are: a reliable database of anthropometric characteristics of the automotive industry population, enabling workstation design or redesign that matches the users, increased product quality, and fewer economic, medical, and union complaints.
Professional monitoring and critical incident reporting using personal digital assistants.
Bent, Paul D; Bolsin, Stephen N; Creati, Bernie J; Patrick, Andrew J; Colson, Mark E
2002-11-04
To assess the practicality of using personal digital assistants (PDAs) for the collection of logbook data, procedural performance data and critical incident reports in anaesthetic trainees. Pilot study. Two tertiary referral centres (in Victoria and New Zealand) and a large district hospital in Queensland. Six accredited Australian and New Zealand College of Anaesthetists (ANZCA) registrars and their ANZCA training supervisors. Registrars and supervisors underwent initial training for one hour, and supervisors were provided with ongoing support. Reliable use of the program, average time for data entry and number of procedures logged. ANZCA trainees reliably enter data into PDAs. The data can be transferred to a central database, where they can be remotely analysed before results are fed back to trainees. This technology can be used to monitor professional performance in ANZCA trainees.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remec, Igor; Rosseel, Thomas M; Field, Kevin G
Life extensions of nuclear power plants to 60 and potentially 80 years of operation have renewed interest in long-term material degradation. One material being considered is concrete, with a particular focus on radiation-induced effects. Based on the projected neutron fluence values (E > 0.1 MeV) in the concrete biological shields of the US pressurized water reactor fleet and the available data on radiation effects on concrete, some decrease in mechanical properties of concrete cannot be ruled out during extended operation beyond 60 years. An expansion of the irradiated concrete database and a reliable determination of the relevant neutron fluence energy cutoff value are necessary to ensure reliable risk assessment for extended operation of nuclear power plants.
Denis, Julie; Machouart, Marie; Morio, Florent; Sabou, Marcela; Kauffmann-LaCroix, Catherine; Contet-Audonneau, Nelly; Candolfi, Ermanno; Letscher-Bru, Valérie
2016-01-01
The genus Malassezia comprises commensal yeasts on human skin. These yeasts are involved in superficial infections but are also isolated in deeper infections, such as fungemia, particularly in certain at-risk patients, such as neonates or patients with parenteral nutrition catheters. Very little is known about Malassezia epidemiology and virulence. This is due mainly to the difficulty of distinguishing species. Currently, species identification is based on morphological and biochemical characteristics. Only molecular biology techniques identify species with certainty, but they are time-consuming and expensive. The aim of this study was to develop and evaluate a matrix-assisted laser desorption ionization–time of flight (MALDI-TOF) database for identifying Malassezia species by mass spectrometry. Eighty-five Malassezia isolates from patients in three French university hospitals were investigated. Each strain was identified by internal transcribed spacer sequencing. Forty-five strains of the six species Malassezia furfur, M. sympodialis, M. slooffiae, M. globosa, M. restricta, and M. pachydermatis allowed the creation of a MALDI-TOF database. Forty other strains were used to test this database. All strains were identified by our Malassezia database with log scores of >2.0, according to the manufacturer's criteria. Repeatability and reproducibility tests showed a coefficient of variation of the log score values of <10%. In conclusion, our new Malassezia database allows easy, fast, and reliable identification of Malassezia species. Implementation of this database will contribute to a better, more rapid identification of Malassezia species and will be helpful in gaining a better understanding of their epidemiology. PMID:27795342
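The repeatability criterion reported above is a coefficient of variation (CV) of the MALDI-TOF log score values below 10%. A minimal sketch of that computation follows; the replicate log scores are invented for illustration and are not the study's measurements.

```python
import statistics

def coefficient_of_variation(scores):
    """CV as a percentage: sample standard deviation over the mean."""
    return 100.0 * statistics.stdev(scores) / statistics.mean(scores)

# Hypothetical replicate log scores for one isolate; all above the >2.0
# identification threshold quoted from the manufacturer's criteria.
replicate_log_scores = [2.31, 2.28, 2.40, 2.25, 2.35]
cv = coefficient_of_variation(replicate_log_scores)  # well under the 10% cutoff
```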
Lazzari, Barbara; Caprera, Andrea; Cestaro, Alessandro; Merelli, Ivan; Del Corvo, Marcello; Fontana, Paolo; Milanesi, Luciano; Velasco, Riccardo; Stella, Alessandra
2009-06-29
Two complete genome sequences are available for Vitis vinifera Pinot noir. Based on the sequence and gene predictions produced by the IASMA, we performed an in silico detection of putative microRNA genes and of their targets, and collected the most reliable microRNA predictions in a web database. The application is available at http://www.itb.cnr.it/ptp/grapemirna/. The program FindMiRNA was used to detect putative microRNA genes in the grape genome. A very high number of predictions was retrieved, calling for validation. Nine parameters were calculated and, based on the grape microRNA dataset available at miRBase, thresholds were defined and applied to FindMiRNA predictions having targets in gene exons. In the resulting subset, predictions were ranked according to precursor positions and sequence similarity, and to target identity. To further validate FindMiRNA predictions, comparisons to the Arabidopsis genome, to the grape Genoscope genome, and to the grape EST collection were performed. Results were stored in a MySQL database and a web interface was prepared to query the database and retrieve predictions of interest. The GrapeMiRNA database encompasses 5,778 microRNA predictions spanning the whole grape genome. Predictions are integrated with information that can be of use in selection procedures. Tools added in the web interface also allow predictions to be inspected according to gene ontology classes and metabolic pathways of targets. The GrapeMiRNA database can be of help in selecting candidate microRNA genes to be validated.
biochem4j: Integrated and extensible biochemical knowledge through graph databases.
Swainston, Neil; Batista-Navarro, Riza; Carbonell, Pablo; Dobson, Paul D; Dunstan, Mark; Jervis, Adrian J; Vinaixa, Maria; Williams, Alan R; Ananiadou, Sophia; Faulon, Jean-Loup; Mendes, Pedro; Kell, Douglas B; Scrutton, Nigel S; Breitling, Rainer
2017-01-01
Biologists and biochemists have at their disposal a number of excellent, publicly available data resources such as UniProt, KEGG, and NCBI Taxonomy, which catalogue biological entities. Despite the usefulness of these resources, they remain fundamentally unconnected. While links may appear between entries across these databases, users are typically only able to follow such links by manual browsing or through specialised workflows. Although many of the resources provide web-service interfaces for computational access, performing federated queries across databases remains a non-trivial but essential activity in interdisciplinary systems and synthetic biology programmes. What is needed are integrated repositories to catalogue both biological entities and, crucially, the relationships between them. Such a resource should be extensible, such that newly discovered relationships (for example, those between novel, synthetic enzymes and non-natural products) can be added over time. With the introduction of graph databases, the barrier to the rapid generation, extension and querying of such a resource has been lowered considerably. With a particular focus on metabolic engineering as an illustrative application domain, biochem4j, freely available at http://biochem4j.org, is introduced to provide an integrated, queryable database that warehouses chemical, reaction, enzyme and taxonomic data from a range of reliable resources. The biochem4j framework establishes a starting point for the flexible integration and exploitation of an ever-wider range of biological data sources, from public databases to laboratory-specific experimental datasets, for the benefit of systems biologists, biosystems engineers and the wider community of molecular biologists and biological chemists.
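The core idea, typed entities connected by typed relationships, can be sketched with a toy in-memory edge list. The entity names and relationship types below are invented for illustration and are not biochem4j's actual schema; the real resource is served by a graph database and queried with its query language.

```python
# Each edge is (source entity, relationship type, destination entity).
edges = [
    ("glucose",    "substrate_of", "hexokinase_rxn"),
    ("ATP",        "substrate_of", "hexokinase_rxn"),
    ("hexokinase", "catalyses",    "hexokinase_rxn"),
    ("hexokinase", "found_in",     "S. cerevisiae"),
]

def neighbours(node, relation):
    """Follow one relationship type outward from a node."""
    return [dst for src, rel, dst in edges if src == node and rel == relation]

def enzymes_for(reaction):
    """A tiny cross-entity query: which enzymes catalyse a given reaction?"""
    return [src for src, rel, dst in edges if rel == "catalyses" and dst == reaction]
```

Storing relationships as first-class data is what makes such "federated" questions a single traversal instead of a manual hop between separate databases.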
Oliveira, S R M; Almeida, G V; Souza, K R R; Rodrigues, D N; Kuser-Falcão, P R; Yamagishi, M E B; Santos, E H; Vieira, F D; Jardine, J G; Neshich, G
2007-10-05
An effective strategy for managing protein databases is to provide mechanisms to transform raw data into consistent, accurate and reliable information. Such mechanisms will greatly reduce operational inefficiencies and improve one's ability to better handle scientific objectives and interpret the research results. To achieve this challenging goal for the STING project, we introduce Sting_RDB, a relational database of structural parameters for protein analysis with support for data warehousing and data mining. In this article, we highlight the main features of Sting_RDB and show how a user can explore it for efficient and biologically relevant queries. Considering its importance for molecular biologists, effort has been made to advance Sting_RDB toward data quality assessment. To the best of our knowledge, Sting_RDB is one of the most comprehensive data repositories for protein analysis, now also capable of providing its users with a data quality indicator. This paper differs from our previous study in many aspects. First, we introduce Sting_RDB, a relational database with mechanisms for efficient and relevant queries using SQL. Sting_RDB evolved from the earlier text-based (flat file) database, in which data consistency and integrity were not guaranteed. Second, we provide support for data warehousing and mining. Third, the data quality indicator was introduced. Finally, and probably most importantly, complex queries that could not be posed on a text-based database are now easily implemented. Further details are accessible at the Sting_RDB demo web page: http://www.cbi.cnptia.embrapa.br/StingRDB.
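The kind of query that is impractical on a flat file but trivial in SQL can be sketched with an in-memory SQLite database. The table and column names below are invented for illustration; Sting_RDB's actual schema is not described in this abstract.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE residue_params (
    pdb_id TEXT, chain TEXT, residue INTEGER, accessibility REAL)""")
conn.executemany(
    "INSERT INTO residue_params VALUES (?, ?, ?, ?)",
    [("1ABC", "A", 10, 12.5), ("1ABC", "A", 11, 85.0), ("2XYZ", "B", 5, 40.2)],
)
# A relational filter over structural parameters: solvent-exposed residues.
exposed = conn.execute(
    "SELECT pdb_id, residue FROM residue_params WHERE accessibility > 50"
).fetchall()
```

On a text-based store, the same question would require parsing every record; here it is one declarative statement that the database engine can also index and optimise.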
Central Plant Optimization for Waste Energy Reduction (CPOWER)
2016-12-01
data such as windspeed and solar radiation is recorded in CPOWER. For these periods, the following data fields from the CPOWER database and the weather...The solar radiation data did not appear reliable in the weather dataset for the location, and hence we did not use this. The energy consumption...that several factors affect the total energy consumption of the chiller plant and additional data and additional factors (e.g., solar insolation) may be
Family System of Advanced Charring Ablators for Planetary Exploration Missions
NASA Technical Reports Server (NTRS)
Congdon, William M.; Curry, Donald M.
2005-01-01
Advanced Ablators Program Objectives: 1) Flight-ready(TRL-6) ablative heat shields for deep-space missions; 2) Diversity of selection from family-system approach; 3) Minimum weight systems with high reliability; 4) Optimized formulations and processing; 5) Fully characterized properties; and 6) Low-cost manufacturing. Definition and integration of candidate lightweight structures. Test and analysis database to support flight-vehicle engineering. Results from production scale-up studies and production-cost analyses.
Reliability of diagnosis and clinical efficacy of visceral osteopathy: a systematic review.
Guillaud, Albin; Darbois, Nelly; Monvoisin, Richard; Pinsault, Nicolas
2018-02-17
In 2010, the World Health Organization published benchmarks for training in osteopathy in which osteopathic visceral techniques are included. The purpose of this study was to identify and critically appraise the scientific literature concerning the reliability of diagnosis and the clinical efficacy of techniques used in visceral osteopathy. The databases MEDLINE, OSTMED.DR, the Cochrane Library, Osteopathic Research Web, Google Scholar, the Journal of the American Osteopathic Association (JAOA) website, the International Journal of Osteopathic Medicine (IJOM) website, and the catalog of the Académie d'ostéopathie de France website were searched through December 2017. Only inter-rater reliability studies including at least two raters, or intra-rater reliability studies including at least two assessments by the same rater, were included. For efficacy studies, only randomized controlled trials (RCTs) or crossover studies on unhealthy subjects (any condition, duration and outcome) were included. Risk of bias was determined using a modified version of the quality appraisal tool for studies of diagnostic reliability (QAREL) in reliability studies. For the efficacy studies, the Cochrane risk of bias tool was used to assess their methodological design. Two authors performed data extraction and analysis. Eight reliability studies and six efficacy studies were included. The analysis of reliability studies shows that the diagnostic techniques used in visceral osteopathy are unreliable. Regarding efficacy studies, the least biased study shows no significant difference for the main outcome. The main risks of bias found in the included studies were due to the absence of blinding of the examiners, an unsuitable statistical method or the absence of a primary study outcome. The results of the systematic review lead us to conclude that well-conducted and sound evidence on the reliability and the efficacy of techniques in visceral osteopathy is absent.
The review was registered with PROSPERO on 12 December 2016. The registration number is CRD4201605286.
Martín-Rodríguez, Saúl; Loturco, Irineu; Hunter, Angus M; Rodríguez-Ruiz, David; Munguia-Izquierdo, Diego
2017-12-01
Martín-Rodríguez, S, Loturco, I, Hunter, AM, Rodríguez-Ruiz, D, and Munguia-Izquierdo, D. Reliability and measurement error of tensiomyography to assess mechanical muscle function: A systematic review. J Strength Cond Res 31(12): 3524-3536, 2017. Interest in studying mechanical skeletal muscle function through tensiomyography (TMG) has increased in recent years. This systematic review aimed to (a) report the reliability and measurement error of all TMG parameters (i.e., maximum radial displacement of the muscle belly [Dm], contraction time [Tc], delay time [Td], half-relaxation time [½ Tr], and sustained contraction time [Ts]) and (b) provide critical reflection on how to perform accurate and appropriate measurements for informing clinicians, exercise professionals, and researchers. A comprehensive literature search was performed of the Pubmed, Scopus, Science Direct, and Cochrane databases up to July 2017. Eight studies were included in this systematic review. Meta-analysis could not be performed because of the low quality of the evidence of some of the studies evaluated. Overall, the review of the 9 studies involving 158 participants revealed high relative reliability (intraclass correlation coefficient [ICC]) for Dm (0.91-0.99); moderate-to-high ICC for Ts (0.80-0.96), Tc (0.70-0.98), and ½ Tr (0.77-0.93); and low-to-high ICC for Td (0.60-0.98), independently of the evaluated muscles. In addition, absolute reliability (coefficient of variation [CV]) was low for all TMG parameters except ½ Tr (CV > 20%), whereas measurement error indexes were high for this parameter. In conclusion, this study indicates that 3 of the TMG parameters (Dm, Td, and Tc) are highly reliable, whereas ½ Tr demonstrates insufficient reliability and thus should not be used in future studies.
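The relative-reliability statistic quoted throughout this abstract is the intraclass correlation coefficient. As a rough illustration only (not the reviewed studies' exact computation, which may use other ICC forms), a one-way ICC(1,1) can be computed from repeated measures as follows; the Dm-like measurements are invented.

```python
def icc_one_way(data):
    """One-way ICC(1,1): data is a list of per-subject lists of k repeated measures."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    means = [sum(row) / k for row in data]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)          # between-subjects
    msw = sum((x - m) ** 2 for row, m in zip(data, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Invented test-retest Dm-style values (mm) for three subjects, two trials each.
icc = icc_one_way([[10, 12], [20, 19], [30, 31]])  # high reliability, near 1
```

Intuitively, the coefficient approaches 1 when between-subject variance dominates trial-to-trial noise, which is what the Dm ranges (0.91-0.99) reported above reflect.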
2016-10-01
Reports an error in "Reliability Generalization of the Multigroup Ethnic Identity Measure-Revised (MEIM-R)" by Hayley M. Herrington, Timothy B. Smith, Erika Feinauer and Derek Griner ( Journal of Counseling Psychology , Advanced Online Publication, Mar 17, 2016, np). The name of author Erika Feinauer was misspelled as Erika Feinhauer. All versions of this article have been corrected. (The following abstract of the original article appeared in record 2016-13160-001.) Individuals' strength of ethnic identity has been linked with multiple positive indicators, including academic achievement and overall psychological well-being. The measure researchers use most often to assess ethnic identity, the Multigroup Ethnic Identity Measure (MEIM), underwent substantial revision in 2007. To inform scholars investigating ethnic identity, we performed a reliability generalization analysis on data from the revised version (MEIM-R) and compared it with data from the original MEIM. Random-effects weighted models evaluated internal consistency coefficients (Cronbach's alpha). Reliability coefficients for the MEIM-R averaged α = .88 across 37 samples, a statistically significant increase over the average of α = .84 for the MEIM across 75 studies. Reliability coefficients for the MEIM-R did not differ across study and participant characteristics such as sample gender and ethnic composition. However, consistently lower reliability coefficients averaging α = .81 were found among participants with low levels of education, suggesting that greater attention to data reliability is warranted when evaluating the ethnic identity of individuals such as middle-school students. Future research will be needed to ascertain whether data with other measures of aspects of personal identity (e.g., racial identity, gender identity) also differ as a function of participant level of education and associated cognitive or maturation processes. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
acdc – Automated Contamination Detection and Confidence estimation for single-cell genome data
Lux, Markus; Kruger, Jan; Rinke, Christian; ...
2016-12-20
A major obstacle in single-cell sequencing is sample contamination with foreign DNA. To guarantee clean genome assemblies and to prevent the introduction of contamination into public databases, considerable quality control efforts are put into post-sequencing analysis. Contamination screening generally relies on reference-based methods such as database alignment or marker gene search, which limits the set of detectable contaminants to organisms with closely related reference species. As genomic coverage in the tree of life is highly fragmented, there is an urgent need for a reference-free methodology for contaminant identification in sequence data. We present acdc, a tool specifically developed to aid the quality control process of genomic sequence data. By combining supervised and unsupervised methods, it reliably detects both known and de novo contaminants. First, 16S rRNA gene prediction and the inclusion of ultrafast exact alignment techniques allow sequence classification using existing knowledge from databases. Second, reference-free inspection is enabled by the use of state-of-the-art machine learning techniques that include fast, non-linear dimensionality reduction of oligonucleotide signatures and subsequent clustering algorithms that automatically estimate the number of clusters. The latter also enables the removal of any contaminant, yielding a clean sample. Furthermore, given the data complexity and the ill-posedness of clustering, acdc employs bootstrapping techniques to provide statistically profound confidence values. Tested on a large number of samples from diverse sequencing projects, our software is able to quickly and accurately identify contamination. Results are displayed in an interactive user interface. Acdc can be run from the web as well as a dedicated command line application, which allows easy integration into large sequencing project analysis workflows. Acdc can reliably detect contamination in single-cell genome data. In addition to database-driven detection, it complements existing tools by its unsupervised techniques, which allow for the detection of de novo contaminants. Our contribution has the potential to drastically reduce the amount of resources put into these processes, particularly in the context of limited availability of reference species. As single-cell genome data continues to grow rapidly, acdc adds to the toolkit of crucial quality assurance tools.
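One ingredient of the reference-free pipeline described above is an oligonucleotide signature, a fixed-length frequency vector that tools of this kind reduce and cluster. The sketch below computes a tetranucleotide (4-mer) signature; it is a generic illustration of the feature, not acdc's implementation.

```python
from collections import Counter
from itertools import product

# All 256 possible tetramers, in a fixed order so every sequence maps to the
# same vector space regardless of which tetramers it actually contains.
TETRAMERS = ["".join(p) for p in product("ACGT", repeat=4)]

def tetra_signature(seq):
    """Normalised 256-dimensional tetranucleotide frequency vector."""
    counts = Counter(seq[i:i + 4] for i in range(len(seq) - 3))
    total = max(sum(counts[t] for t in TETRAMERS), 1)  # guard against empty input
    return [counts[t] / total for t in TETRAMERS]

sig = tetra_signature("ACGTACGTACGT")  # toy sequence; real contigs are far longer
```

Because such signatures are compositional rather than reference-based, sequences from different organisms tend to separate in this space even when no related genome exists in any database.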
Routh, Shreya; Nandagopal, Krishnadas
2017-01-01
Resveratrol, taxol, podophyllotoxin, withanolides and their derivatives find applications in anti-cancer therapy. They are plant-derived compounds whose chemical structures and synthesis limit their natural availability and restrict large-scale industrial production. Hence, their production by various biotechnological approaches may hold promise for a continuous and reliable mode of supply. We review process and product patents in this regard. Accordingly, we provide a general outline for searching the freely accessible WIPO, EPO, USPTO and Cambia databases with several keywords and patent codes. We have tabulated both granted and filed patents from the said databases. We retrieved ~40 patents from these databases. Novel biotechnological processes for the production of these anticancer compounds include Agrobacterium rhizogenes-mediated hairy root culture, suspension culture, cell culture with elicitors, use of recombinant microorganisms, and bioreactors, among others. The results are both database-specific and query-specific. A ten-year search window yielded 33 patents. The utility of the search strategy is discussed in the light of biotechnological developments in the field. Those who examine patent literature using similar search strategies may complement the knowledge obtained from perusal of mainstream journal resources. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.
Verification of road databases using multiple road models
NASA Astrophysics Data System (ADS)
Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian
2017-08-01
In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in the form of two probability distributions, the first one for the state of a database object (correct or incorrect), and the second one for the state of the underlying road model (applicable or not applicable). In accordance with the Dempster-Shafer theory, both distributions are mapped to a new state space comprising the classes correct, incorrect and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with greater completeness. Additional experiments reveal that, based on the proposed method, a highly reliable semi-automatic approach for road database verification can be designed.
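The state mapping described above can be sketched as follows: each module reports a probability that the database object is correct and a probability that its road model applies, and any mass the model cannot vouch for goes to "unknown". This is a hedged simplification for illustration; the authors' actual Dempster-Shafer combination of multiple modules is not reproduced here.

```python
def to_three_state(p_correct, p_applicable):
    """Map (correct/incorrect) x (applicable/not applicable) to the
    three-class state space {correct, incorrect, unknown}."""
    return {
        "correct":   p_correct * p_applicable,
        "incorrect": (1.0 - p_correct) * p_applicable,
        "unknown":   1.0 - p_applicable,   # the model cannot judge this object
    }

# A module that is fairly sure the road is correct, but whose road model
# only partially applies to this scene (values invented).
m = to_three_state(p_correct=0.9, p_applicable=0.8)
```

Keeping "unknown" as an explicit class is what lets modules with inapplicable road models abstain instead of casting misleading correct/incorrect votes.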
NASA Astrophysics Data System (ADS)
Strotov, Valery V.; Taganov, Alexander I.; Konkin, Yuriy V.; Kolesenkov, Aleksandr N.
2017-10-01
Processing and analysing Earth remote sensing data on board an ultra-small spacecraft is a pressing task, given the significant energy cost of data transfer and the low performance of onboard computers. This raises the problem of effective and reliable storage of the overall information flow from the onboard data-collection systems, including Earth remote sensing data, in a specialized database. The paper considers the peculiarities of operating a database management system with a multilevel memory structure. A storage format has been developed that describes the physical structure of the database and contains the parameters required for loading information. This structure reduces the memory occupied by the database because key values need not be stored separately. The paper presents the architecture of a relational database management system intended for embedding in the onboard software of an ultra-small spacecraft. Such a database management system can be used to build databases for storing various information, including Earth remote sensing data, for subsequent processing. The proposed architecture places low demands on the computing power and memory resources available on board an ultra-small spacecraft, and data integrity is ensured during input and modification of the structured information.
Development of the updated system of city underground pipelines based on Visual Studio
NASA Astrophysics Data System (ADS)
Zhang, Jianxiong; Zhu, Yun; Li, Xiangdong
2009-10-01
Our city operates an integrated pipeline network management system built on ArcGIS Engine 9.1 as the underlying development platform, with Oracle9i as the basic database for data storage. In this system, ArcGIS SDE 9.1 serves as the spatial data engine, and the system is a comprehensive management application developed with the Visual Studio visual development tools. Because the system's pipeline update function suffered from slow updates and occasional data loss, and to ensure that the underground pipeline data can be updated conveniently, frequently and in real time while preserving its currency and integrity, we developed and added a new update module to the system. The module provides powerful data update functions and supports input, output, and rapid bulk updates of data. The new module was developed with the Visual Studio visual development tools and uses Access as the basic database for data storage. Graphics can be edited in AutoCAD, and the database is updated through the link between the graphics and the system. Practice shows that the update module is well compatible with the original system and updates the database reliably and efficiently.
A manual for a laboratory information management system (LIMS) for light stable isotopes
Coplen, Tyler B.
1997-01-01
The reliability and accuracy of isotopic data can be improved by utilizing database software to (i) store information about samples, (ii) store the results of mass spectrometric isotope-ratio analyses of samples, (iii) calculate analytical results using standardized algorithms stored in a database, (iv) normalize stable isotopic data to international scales using isotopic reference materials, and (v) generate multi-sheet paper templates for convenient sample loading of automated mass-spectrometer sample preparation manifolds. Such a database program is presented herein. Major benefits of this system include (i) an increase in laboratory efficiency, (ii) reduction in the use of paper, (iii) reduction in workload due to the elimination or reduction of retyping of data by laboratory personnel, and (iv) decreased errors in data reported to sample submitters. Such a database provides a complete record of when and how often laboratory reference materials have been analyzed and provides a record of what correction factors have been used through time. It provides an audit trail for stable isotope laboratories. Since the original publication of the manual for LIMS for Light Stable Isotopes, the isotopes 3H, 3He, and 14C, and the chlorofluorocarbons (CFCs), CFC-11, CFC-12, and CFC-113, have been added to this program.
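Step (iv), normalizing measured values to an international scale using isotopic reference materials, is conventionally a two-point linear stretch between two reference materials with accepted values. A minimal Python sketch (the measured values below are invented; the accepted δ2H values of 0 ‰ for VSMOW and -428 ‰ for SLAP are the standard anchors of that scale):

```python
def normalize(delta_measured, rm1_measured, rm1_true, rm2_measured, rm2_true):
    """Two-point normalization: map a measured delta value onto the
    international scale defined by two reference materials (RMs)."""
    slope = (rm2_true - rm1_true) / (rm2_measured - rm1_measured)
    return rm1_true + slope * (delta_measured - rm1_measured)

# Suppose VSMOW measured 0.0 and SLAP measured -400.0 on our instrument
# (an invented scale compression); a sample measured at -200.0 becomes:
assert abs(normalize(-200.0, 0.0, 0.0, -400.0, -428.0) - (-214.0)) < 1e-9
```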
The problem with coal-waste dumps inventory in Upper Silesian Coal Basin
NASA Astrophysics Data System (ADS)
Abramowicz, Anna; Chybiorz, Ryszard
2017-04-01
Coal-waste dumps are a side effect of coal mining, which has been carried out in Poland for 250 years. They have a negative influence on the landscape and the environment, polluting soil, vegetation and groundwater. Their number, size and shape change over time: newly produced wastes are deposited, altering dump shapes and enlarging their size, while deposited wastes, especially overburnt material, are excavated for uses such as road construction, changing shape and size up to the point of a dump's disappearance. Many databases and inventory systems have been created to control these hazards, but their shortcomings prevent reliable statistics. Three representative databases were analysed with respect to their structure and the way waste dumps are described, classified and visualized. The main problem is the correct classification of dumps in terms of name and type; an additional difficulty is accurate quantitative description (area and capacity). A comprehensive database was created by comparing and verifying the information contained in the existing databases and supplementing it from separate documentation. An analysis of the variability of coal-waste dumps over time is also included. The project has been financed from the funds of the Leading National Research Centre (KNOW) received by the Centre for Polar Studies for the period 2014-2018.
Zhao, Yongsheng; Zhao, Jihong; Huang, Ying; Zhou, Qing; Zhang, Xiangping; Zhang, Suojiang
2014-08-15
A comprehensive database on the toxicity of ionic liquids (ILs), containing over 4,000 data entries, is established. Based on the database, the relationship between an IL's structure and its toxicity has been analyzed qualitatively. Furthermore, quantitative structure-activity relationship (QSAR) models are developed to predict the toxicities (EC50 values) of various ILs toward the leukemia rat cell line IPC-81. Four parameters selected by the heuristic method (HM) are used in multiple linear regression (MLR) and support vector machine (SVM) studies. For the training sets, the squared correlation coefficients (R²) of the MLR and SVM models are 0.918 and 0.959, with root mean square errors (RMSE) of 0.258 and 0.179, respectively. For the test sets, the MLR model yields R² = 0.892 and RMSE = 0.329, while the SVM model yields R² = 0.958 and RMSE = 0.234. The nonlinear SVM model thus clearly outperforms MLR, indicating that it is more reliable for predicting the toxicity of ILs. The study also shows that increasing the relative number of O atoms in a molecule decreases the toxicity of the IL. Copyright © 2014 Elsevier B.V. All rights reserved.
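The figures of merit quoted above, R² and RMSE, can be computed for any model's predictions. A self-contained Python sketch with invented data (not the paper's):

```python
import math

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean square error of the predictions."""
    return math.sqrt(sum((t - p) ** 2
                         for t, p in zip(y_true, y_pred)) / len(y_true))

# Toy EC50-style values: observed vs. predicted
y_true = [4.1, 3.5, 5.0, 2.8]
y_pred = [4.0, 3.6, 4.8, 3.0]
assert rmse(y_true, y_pred) < 0.2
assert r_squared(y_true, y_pred) > 0.9
```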
Tatem, Andrew J; Guerra, Carlos A; Kabaria, Caroline W; Noor, Abdisalan M; Hay, Simon I
2008-10-27
The efficient allocation of financial resources for malaria control and the optimal distribution of appropriate interventions require accurate information on the geographic distribution of malaria risk and of the human populations it affects. Low population densities in rural areas and high population densities in urban areas can influence malaria transmission substantially. Here, the Malaria Atlas Project (MAP) global database of Plasmodium falciparum parasite rate (PfPR) surveys, medical intelligence and contemporary population surfaces are utilized to explore these relationships and other issues involved in combining malaria risk maps with those of human population distribution in order to define populations at risk more accurately. First, an existing population surface was examined to determine if it was sufficiently detailed to be used reliably as a mask to identify areas of very low and very high population density as malaria free regions. Second, the potential of international travel and health guidelines (ITHGs) for identifying malaria free cities was examined. Third, the differences in PfPR values between surveys conducted in author-defined rural and urban areas were examined. Fourth, the ability of various global urban extent maps to reliably discriminate these author-based classifications of urban and rural in the PfPR database was investigated. Finally, the urban map that most accurately replicated the author-based classifications was analysed to examine the effects of urban classifications on PfPR values across the entire MAP database. Masks of zero population density excluded many non-zero PfPR surveys, indicating that the population surface was not detailed enough to define areas of zero transmission resulting from low population densities. In contrast, the ITHGs enabled the identification and mapping of 53 malaria free urban areas within endemic countries. 
Comparison of PfPR survey results showed significant differences between author-defined 'urban' and 'rural' designations in Africa, but not for the remainder of the malaria endemic world. The Global Rural Urban Mapping Project (GRUMP) urban extent mask proved most accurate for mapping these author-defined rural and urban locations, and further sub-divisions of urban extents into urban and peri-urban classes enabled the effects of high population densities on malaria transmission to be mapped and quantified. The availability of detailed, contemporary census and urban extent data for the construction of coherent and accurate global spatial population databases is often poor. These known sources of uncertainty in population surfaces and urban maps have the potential to be incorporated into future malaria burden estimates. Currently, insufficient spatial information exists globally to identify areas accurately where population density is low enough to impact upon transmission. Medical intelligence does however exist to reliably identify malaria free cities. Moreover, in Africa, urban areas that have a significant effect on malaria transmission can be mapped.
Sivapalarajah, Shayeeshan; Krishnakumar, Mathangi; Bickerstaffe, Harry; Chan, YikYing; Clarkson, Joseph; Hampden-Martin, Alistair; Mirza, Ahmad; Tanti, Matthew; Marson, Anthony; Pirmohamed, Munir; Mirza, Nasir
2018-02-01
Current antiepileptic drugs (AEDs) have several shortcomings. For example, they fail to control seizures in 30% of patients. Hence, there is a need to identify new AEDs. Drug repurposing is the discovery of new indications for approved drugs. This drug "recycling" offers the potential of significant savings in the time and cost of drug development. Many drugs licensed for other indications exhibit antiepileptic efficacy in animal models. Our aim was to create a database of "prescribable" drugs, approved for other conditions, with published evidence of efficacy in animal models of epilepsy, and to collate data that would assist in choosing the most promising candidates for drug repurposing. The database was created by the following: (1) computational literature-mining using novel software that identifies Medline abstracts containing the name of a prescribable drug, a rodent model of epilepsy, and a phrase indicating seizure reduction; then (2) crowdsourced manual curation of the identified abstracts. The final database includes 173 drugs and 500 abstracts. It is made freely available at www.liverpool.ac.uk/D3RE/PDE3. The database is reliable: 94% of the included drugs have corroborative evidence of efficacy in animal models (for example, evidence from multiple independent studies). The database includes many drugs that are appealing candidates for repurposing, as they are widely accepted by prescribers and patients (the database includes half of the 20 most commonly prescribed drugs in England), and they target many proteins involved in epilepsy but not targeted by current AEDs. It is important to note that the drugs are of potential relevance to human epilepsy: the database is highly enriched with drugs that target proteins of known causal human epilepsy genes (Fisher's exact test P-value < 3 × 10^-5). We present data to help prioritize the most promising candidates for repurposing from the database.
The PDE3 database is an important new resource for drug repurposing research in epilepsy. Wiley Periodicals, Inc. © 2018 International League Against Epilepsy.
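Step (1) of the database construction, the computational literature-mining, can be sketched as a co-occurrence filter over abstracts. The drug list and regular expressions below are invented placeholders; the actual software and its vocabularies are not described in the abstract:

```python
import re

# Illustrative vocabularies (placeholders, not the real term lists)
DRUGS = {"metformin", "verapamil", "rapamycin"}
MODELS = re.compile(r"\b(kindl\w+|pentylenetetrazol|pilocarpine"
                    r"|maximal electroshock)\b", re.I)
EFFECT = re.compile(r"\b(reduc\w+ seizure|seizure reduction"
                    r"|anticonvulsant)\b", re.I)

def flag_abstract(text):
    """Return drugs mentioned in an abstract that also names a rodent
    epilepsy model and a seizure-reduction phrase."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    if MODELS.search(text) and EFFECT.search(text):
        return sorted(DRUGS & words)
    return []

hit = flag_abstract("Verapamil showed anticonvulsant effects "
                    "in the pilocarpine model.")
assert hit == ["verapamil"]
```

Abstracts flagged this way would then go to the crowdsourced manual curation of step (2).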
Abraha, Iosief; Giovannini, Gianni; Serraino, Diego; Fusco, Mario; Montedori, Alessandro
2016-03-18
Breast, lung and colorectal cancers constitute the most common cancers worldwide and their epidemiology, related health outcomes and quality indicators can be studied using administrative healthcare databases. To constitute a reliable source for research, administrative healthcare databases need to be validated. The aim of this protocol is to perform the first systematic review of studies reporting the validation of International Classification of Diseases 9th and 10th revision codes to identify breast, lung and colorectal cancer diagnoses in administrative healthcare databases. This review protocol has been developed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocol (PRISMA-P) 2015 statement. We will search the following databases: MEDLINE, EMBASE, Web of Science and the Cochrane Library, using appropriate search strategies. We will include validation studies that used administrative data to identify breast, lung and colorectal cancer diagnoses or studies that evaluated the validity of breast, lung and colorectal cancer codes in administrative data. The following inclusion criteria will be used: (1) the presence of a reference standard case definition for the disease of interest; (2) the presence of at least one test measure (eg, sensitivity, positive predictive values, etc) and (3) the use of a data source from an administrative database. Pairs of reviewers will independently abstract data using standardised forms and will assess quality using a checklist based on the Standards for Reporting of Diagnostic Accuracy (STARD) criteria. Ethics approval is not required. We will submit results of this study to a peer-reviewed journal for publication. 
The results will serve as a guide to identify appropriate case definitions and algorithms of breast, lung and colorectal cancers for researchers involved in validating administrative healthcare databases, as well as for outcome research on these conditions using administrative healthcare databases. PROSPERO registration number: CRD42015026881. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Zilaout, Hicham; Vlaanderen, Jelle; Houba, Remko; Kromhout, Hans
2017-07-01
In 2000, a prospective Dust Monitoring Program (DMP) was started in which measurements of workers' exposure to respirable dust and quartz are collected in member companies of the European Industrial Minerals Association (IMA-Europe). After 15 years, the resulting IMA-DMP database allows a detailed overview of respirable dust and quartz exposure levels over time within this industrial sector. Our aim is to describe the IMA-DMP and the current state of the corresponding database, which is still growing as the programme continues; we also highlight its future use, including its utility for the industrial minerals producing sector. Exposure data are obtained following a common protocol comprising a standardized sampling strategy, standardized sampling and analytical methods, and a data management system. After strict quality control procedures, exposure data are added to a central database. The data comprise personal exposure measurements together with auxiliary information on work and other conditions during sampling. Currently, the IMA-DMP database consists of almost 28,000 personal measurements performed from 2000 to 2015 across 29 half-yearly sampling campaigns. The exposure data were collected from approximately 5,000 workers at 160 different worksites owned by 35 industrial mineral companies in 23 European countries. The IMA-DMP database provides the European minerals sector with reliable data on workers' personal exposure to respirable dust and quartz. The database can be used as a powerful tool to address outstanding scientific issues on long-term exposure trends and exposure variability and, importantly, as a surveillance tool to evaluate exposure control measures. It will also be valuable for future epidemiological studies of respiratory health effects and will allow estimation of quantitative exposure-response relationships. 
Copyright © 2017 The Authors. Published by Elsevier GmbH. All rights reserved.
State analysis requirements database for engineering complex embedded systems
NASA Technical Reports Server (NTRS)
Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.
2004-01-01
It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which provides a tool for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.
Red Lesion Detection Using Dynamic Shape Features for Diabetic Retinopathy Screening.
Seoud, Lama; Hurtut, Thomas; Chelbi, Jihed; Cheriet, Farida; Langlois, J M Pierre
2016-04-01
The development of an automatic telemedicine system for computer-aided screening and grading of diabetic retinopathy depends on reliable detection of retinal lesions in fundus images. In this paper, a novel method for the automatic detection of both microaneurysms and hemorrhages in color fundus images is described and validated. The main contribution is a new set of shape features, called Dynamic Shape Features, which do not require precise segmentation of the regions to be classified. These features represent the evolution of shape during image flooding and make it possible to discriminate between lesions and vessel segments. The method is validated per lesion and per image using six databases, four of which are publicly available. It proves robust with respect to variability in image resolution, quality and acquisition system. On the Retinopathy Online Challenge's database, the method achieves a FROC score of 0.420, which ranks it fourth. On the Messidor database, when detecting images with diabetic retinopathy, the proposed method achieves an area under the ROC curve of 0.899, comparable to the score of human experts, and it outperforms state-of-the-art approaches.
Functional Interaction Network Construction and Analysis for Disease Discovery.
Wu, Guanming; Haw, Robin
2017-01-01
Network-based approaches project seemingly unrelated genes or proteins onto a large-scale network context, thereby providing a holistic visualization and analysis platform for genomic data generated by high-throughput experiments, reducing the dimensionality of the data through network modules and increasing statistical power. Based on the Reactome database, the most popular and comprehensive open-source biological pathway knowledgebase, we have developed a highly reliable protein functional interaction network covering around 60% of all human genes, together with an app called ReactomeFIViz for Cytoscape, the most popular biological network visualization and analysis platform. In this chapter, we describe in detail how this functional interaction network is constructed: integrating multiple external data sources, extracting functional interactions from human-curated pathway databases, training a machine learning classifier (a Naïve Bayesian classifier), predicting interactions with the trained classifier, and finally building the functional interaction database. We also provide an example of using ReactomeFIViz to perform network-based analysis of a gene list.
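The classifier named above is a Naïve Bayesian classifier over pairwise evidence features. A toy, self-contained Python sketch with invented feature names and training pairs (Reactome's real feature set and training data are far larger):

```python
import math
from collections import defaultdict

def train(pairs):
    """pairs: list of (features_dict, label), label 1 = functional interaction.
    Counts, per class, how often each binary feature is present."""
    counts = {0: defaultdict(int), 1: defaultdict(int)}
    totals = {0: 0, 1: 0}
    for feats, label in pairs:
        totals[label] += 1
        for name, value in feats.items():
            if value:
                counts[label][name] += 1
    return counts, totals

def predict_log_odds(model, feats):
    """Laplace-smoothed naive-Bayes log-odds that the pair interacts."""
    counts, totals = model
    log_odds = math.log(totals[1] / totals[0])
    for name, value in feats.items():
        p1 = (counts[1][name] + 1) / (totals[1] + 2)
        p0 = (counts[0][name] + 1) / (totals[0] + 2)
        log_odds += math.log(p1 / p0) if value else math.log((1 - p1) / (1 - p0))
    return log_odds

data = [({"coexpressed": 1, "shared_domain": 1}, 1),
        ({"coexpressed": 1, "shared_domain": 0}, 1),
        ({"coexpressed": 0, "shared_domain": 0}, 0),
        ({"coexpressed": 0, "shared_domain": 1}, 0)]
model = train(data)
assert predict_log_odds(model, {"coexpressed": 1, "shared_domain": 1}) > 0
assert predict_log_odds(model, {"coexpressed": 0, "shared_domain": 0}) < 0
```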
Designing Reliable Cohorts of Cardiac Patients across MIMIC and eICU
Chronaki, Catherine; Shahin, Abdullah; Mark, Roger
2016-01-01
The design of the patient cohort is an essential and fundamental part of any clinical patient study. Knowledge of the electronic health records, the underlying database management system, and the relevant clinical workflows is central to an effective cohort design. However, with technical, semantic, and organizational interoperability limitations, the database queries associated with a patient cohort may need to be reconfigured at every participating site. i2b2 and SHRINE advance the notion of patient cohorts as first-class objects to be shared, aggregated, and recruited for research purposes across clinical sites. This paper reports on initial efforts to assess the integration of the Medical Information Mart for Intensive Care (MIMIC) and Philips eICU, two large-scale anonymized intensive care unit (ICU) databases, using standard terminologies, i.e., LOINC, ICD9-CM and SNOMED-CT. The focus of this work is on lab and microbiology observations and key demographics for patients with a primary cardiovascular ICD9-CM diagnosis. Results and discussion, reflecting on reference core terminology standards, offer insights into efforts to combine detailed intensive care data from multiple ICUs worldwide. PMID:27774488
A comprehensive database of quality-rated fossil ages for Sahul's Quaternary vertebrates.
Rodríguez-Rey, Marta; Herrando-Pérez, Salvador; Brook, Barry W; Saltré, Frédérik; Alroy, John; Beeton, Nicholas; Bird, Michael I; Cooper, Alan; Gillespie, Richard; Jacobs, Zenobia; Johnson, Christopher N; Miller, Gifford H; Prideaux, Gavin J; Roberts, Richard G; Turney, Chris S M; Bradshaw, Corey J A
2016-07-19
The study of palaeo-chronologies using fossil data provides evidence for past ecological and evolutionary processes, and is therefore useful for predicting patterns and impacts of future environmental change. However, the robustness of inferences made from fossil ages relies heavily on both the quantity and quality of available data. We compiled Quaternary non-human vertebrate fossil ages from Sahul published up to 2013. This, the FosSahul database, includes 9,302 fossil records from 363 deposits, for a total of 478 species within 215 genera, of which 27 are from extinct and extant megafaunal species (2,559 records). We also provide a reliability rating for each absolute age based on the dating protocols and the association between the dated materials and the fossil remains. Our proposed rating system identified 2,422 records with high-quality ages (i.e., a reduction of 74%). There are many applications of the database, including disentangling the confounding influences of hypothetical extinction drivers, better estimating species' spatial distributions relative to palaeo-climates, and potentially identifying new areas for fossil discovery.
Bosma, Laine; Balen, Robert M; Davidson, Erin; Jewesson, Peter J
2003-01-01
The development and integration of a personal digital assistant (PDA)-based point-of-care database into an intravenous resource nurse (IVRN) consultation service for the purposes of consultation management and service characterization are described. The IVRN team provides a consultation service 7 days a week in this 1000-bed tertiary adult care teaching hospital. No simple, reliable method for documenting IVRN patient care activity and facilitating IVRN-initiated patient follow-up evaluation was available. Implementation of a PDA database with exportability of data to statistical analysis software was undertaken in July 2001. A Palm IIIXE PDA was purchased and a three-table, 13-field database was developed using HanDBase software. During the 7-month period of data collection, the IVRN team recorded 4868 consultations for 40 patient care areas. Full analysis of service characteristics was conducted using SPSS 10.0 software. Team members adopted the new technology with few problems, and the authors now can efficiently track and analyze the services provided by their IVRN team.
Holliday, Jeffrey J; Turnbull, Rory; Eychenne, Julien
2017-10-01
This article presents K-SPAN (Korean Surface Phonetics and Neighborhoods), a database of surface phonetic forms and several measures of phonological neighborhood density for 63,836 Korean words. Currently publicly available Korean corpora are limited by the fact that they only provide orthographic representations in Hangeul, which is problematic since phonetic forms in Korean cannot be reliably predicted from orthographic forms. We describe the method used to derive the surface phonetic forms from a publicly available orthographic corpus of Korean, and report on several statistics calculated using this database; namely, segment unigram frequencies, which are compared to previously reported results, along with segment-based and syllable-based neighborhood density statistics for three types of representation: an "orthographic" form, which is a quasi-phonological representation, a "conservative" form, which maintains all known contrasts, and a "modern" form, which represents the pronunciation of contemporary Seoul Korean. These representations are rendered in an ASCII-encoded scheme, which allows users to query the corpus without having to read Korean orthography, and permits the calculation of a wide range of phonological measures.
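Segment-based neighborhood density is conventionally the count of lexicon words reachable by one segment substitution, deletion, or addition. A small Python sketch over invented ASCII-transcribed forms (the database's own representations and counts are as described above):

```python
def is_neighbor(a, b):
    """True if b differs from a by exactly one segment substitution,
    deletion, or addition (one character = one segment in this ASCII sketch)."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:  # substitution: exactly one differing position
        return sum(x != y for x, y in zip(a, b)) == 1
    if la > lb:   # make a the shorter word
        a, b = b, a
    # deletion/addition: removing one segment from the longer yields the shorter
    return any(b[:i] + b[i + 1:] == a for i in range(len(b)))

def neighborhood_density(word, lexicon):
    """Number of lexicon entries that are neighbors of `word`."""
    return sum(is_neighbor(word, w) for w in lexicon)

lexicon = ["kan", "pan", "kat", "ka", "kans", "pun"]
assert neighborhood_density("kan", lexicon) == 4  # pan, kat, ka, kans
```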
An evaluation of matching unknown writing inks with the United States International Ink Library.
Laporte, Gerald M; Arredondo, Marlo D; McConnell, Tyra S; Stephens, Joseph C; Cantu, Antonio A; Shaffer, Douglas K
2006-05-01
A database of standards is a valuable resource for forensic casework. Undoubtedly, as more standards (and corresponding information about the specimens) are collected, there is greater certainty of identification when a questioned and a known item cannot be distinguished after a series of analyses. The United States Secret Service and the Internal Revenue Service National Forensic Laboratory jointly maintain the largest known forensic collection of writing inks in the world, comprising over 8,500 ink standards collected worldwide and dating back to the 1920s. This study was conducted to evaluate the reliability of matching arbitrarily purchased pens with known inks from a database. One hundred pens were randomly obtained from a variety of sources and their respective ink compositions were compared with standards. Eighty-five of the inks were determined to be suitable for comparison utilizing optical examinations and thin-layer chromatography. Three of the inks did not match any of the specimens on record; one of these was similar to an ink from an identical brand of pen that was in the database, but had a modified formulation.
Affective norms for 720 French words rated by children and adolescents (FANchild).
Monnier, Catherine; Syssau, Arielle
2017-10-01
FANchild (French Affective Norms for Children) provides norms of valence and arousal for a large corpus of French words (N = 720) rated by 908 French children and adolescents (ages 7, 9, 11, and 13). The ratings were made using the Self-Assessment Manikin (Lang, 1980). Because it combines evaluations of arousal and valence and includes ratings provided by 7-, 9-, 11-, and 13-year-olds, this database complements and extends existing French-language databases. Good response reliability was observed in each of the four age groups. Despite a significant level of consensus, we found age differences in both the valence and arousal ratings: Seven- and 9-year-old children gave higher mean valence and arousal ratings than did the other age groups. Moreover, the tendency to judge words positively (i.e., positive bias) decreased with age. This age- and sex-related database will enable French-speaking researchers to study how the emotional character of words influences their cognitive processing, and how this influence evolves with age. FANchild is available at https://www.researchgate.net/profile/Catherine_Monnier/contributions .
Koo, Henry; Leveridge, Mike; Thompson, Charles; Zdero, Rad; Bhandari, Mohit; Kreder, Hans J; Stephen, David; McKee, Michael D; Schemitsch, Emil H
2008-07-01
The purpose of this study was to measure interobserver reliability of 2 classification systems of pelvic ring fractures and to determine whether computed tomography (CT) improves reliability. The reliability of several radiographic findings was also tested. Thirty patients taken from a database at a Level I trauma facility were reviewed. For each patient, 3 radiographs (AP pelvis, inlet, and outlet) and CT scans were available. Six different reviewers (pelvic and acetabular specialist, orthopaedic traumatologist, or orthopaedic trainee) classified the injury according to Young-Burgess and Tile classification systems after reviewing plain radiographs and then after CT scans. The Kappa coefficient was used to determine interobserver reliability of these classification systems before and after CT scan. For plain radiographs, overall Kappa values for the Young-Burgess and Tile classification systems were 0.72 and 0.30, respectively. For CT scan and plain radiographs, the overall Kappa values for the Young-Burgess and Tile classification systems were 0.63 and 0.33, respectively. The pelvis/acetabular surgeons demonstrated the highest level of agreement using both classification systems. For individual questions, the addition of CT did significantly improve reviewer interpretation of fracture stability. The pre-CT and post-CT Kappa values for fracture stability were 0.59 and 0.93, respectively. The CT scan can improve the reliability of assessment of pelvic stability because of its ability to identify anatomical features of injury. The Young-Burgess system may be optimal for the learning surgeon. The Tile classification system is more beneficial for specialists in pelvic and acetabular surgery.
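The interobserver statistic used above is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal Python sketch with invented ratings (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    chance = sum(c1[k] * c2[k] for k in c1) / n ** 2
    return (observed - chance) / (1 - chance)

# e.g. two reviewers classifying 10 fractures as stable (S) or unstable (U)
r1 = list("SSUUSUSUSS")
r2 = list("SSUUSUUUSS")
kappa = cohens_kappa(r1, r2)  # 9/10 observed agreement, 0.5 by chance
assert 0.7 < kappa < 1.0
```

On this toy data the observed agreement is 0.9 against a chance agreement of 0.5, giving kappa = 0.8, which would fall in the conventional "excellent agreement" band.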
Sawchuk, Dena; Currie, Kris; Vich, Manuel Lagravere; Palomo, Juan Martin
2016-01-01
Objective: To evaluate the accuracy and reliability of the diagnostic tools available for assessing maxillary transverse deficiencies. Methods: An electronic search of three databases was performed from their date of establishment to April 2015, with manual searching of the reference lists of relevant articles. Articles were considered for inclusion if they reported the accuracy or reliability of a diagnostic method or evaluation technique for maxillary transverse dimensions in mixed or permanent dentitions. Risk of bias in the included articles was assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. Results: Nine articles were selected. The studies were heterogeneous, with moderate to low methodological quality, and all had a high risk of bias. Four suggested that arch width prediction indices based on dental cast measurements are unreliable for diagnosis. Frontal cephalograms derived from cone-beam computed tomography (CBCT) images were reportedly more reliable for assessing intermaxillary transverse discrepancies than posteroanterior cephalograms. Two studies proposed new three-dimensional transverse analyses with CBCT images that were reportedly reliable, but these have not been validated for clinical sensitivity or specificity. No studies reported sensitivity, specificity, positive or negative predictive values, likelihood ratios, or ROC curves of the methods for the diagnosis of transverse deficiencies. Conclusions: Current evidence does not enable solid conclusions to be drawn, owing to a lack of reliable, high-quality diagnostic studies evaluating maxillary transverse deficiencies. CBCT images are reportedly more reliable for diagnosis, but further validation is required to confirm CBCT's accuracy and diagnostic superiority. PMID:27668196
Rater methodology for stroboscopy: a systematic review.
Bonilha, Heather Shaw; Focht, Kendrea L; Martin-Harris, Bonnie
2015-01-01
Laryngeal endoscopy with stroboscopy (LES) remains the clinical gold standard for assessing vocal fold function. LES is used to evaluate the efficacy of voice treatments in research studies and clinical practice. LES as a voice treatment outcome tool is only as good as the clinician interpreting the recordings. Research using LES as a treatment outcome measure should be evaluated based on rater methodology and reliability. The purpose of this literature review was to evaluate the rater-related methodology from studies that use stroboscopic findings as voice treatment outcome measures. Systematic literature review. Computerized journal databases were searched for relevant articles using terms: stroboscopy and treatment. Eligible articles were categorized and evaluated for the use of rater-related methodology, reporting of number of raters, types of raters, blinding, and rater reliability. Of the 738 articles reviewed, 80 articles met inclusion criteria. More than one-third of the studies included in the review did not report the number of raters who participated in the study. Eleven studies reported results of rater reliability analysis with only two studies reporting good inter- and intrarater reliability. The comparability and use of results from treatment studies that use LES are limited by a lack of rigor in rater methodology and variable, mostly poor, inter- and intrarater reliability. To improve our ability to evaluate and use the findings from voice treatment studies that use LES features as outcome measures, greater consistency of reporting rater methodology characteristics across studies and improved rater reliability is needed. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Reliability generalization of the Multigroup Ethnic Identity Measure-Revised (MEIM-R).
Herrington, Hayley M; Smith, Timothy B; Feinauer, Erika; Griner, Derek
2016-10-01
[Correction Notice: An Erratum for this article was reported in Vol 63(5) of Journal of Counseling Psychology (see record 2016-33161-001). The name of author Erika Feinauer was misspelled as Erika Feinhauer. All versions of this article have been corrected.] Individuals' strength of ethnic identity has been linked with multiple positive indicators, including academic achievement and overall psychological well-being. The measure researchers use most often to assess ethnic identity, the Multigroup Ethnic Identity Measure (MEIM), underwent substantial revision in 2007. To inform scholars investigating ethnic identity, we performed a reliability generalization analysis on data from the revised version (MEIM-R) and compared it with data from the original MEIM. Random-effects weighted models evaluated internal consistency coefficients (Cronbach's alpha). Reliability coefficients for the MEIM-R averaged α = .88 across 37 samples, a statistically significant increase over the average of α = .84 for the MEIM across 75 studies. Reliability coefficients for the MEIM-R did not differ across study and participant characteristics such as sample gender and ethnic composition. However, consistently lower reliability coefficients averaging α = .81 were found among participants with low levels of education, suggesting that greater attention to data reliability is warranted when evaluating the ethnic identity of individuals such as middle-school students. Future research will be needed to ascertain whether data with other measures of aspects of personal identity (e.g., racial identity, gender identity) also differ as a function of participant level of education and associated cognitive or maturation processes. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
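Cronbach's alpha, the internal-consistency coefficient analyzed in the reliability generalization above, compares the sum of the item variances with the variance of the total scores. A minimal sketch with invented item data (population variances used throughout):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from per-item scores.

    items: one list per item, each holding the scores of the same
    respondents in the same order.
    """
    k, n = len(items), len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per respondent, summed across items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three hypothetical items scored by four respondents:
scores = [[2, 4, 3, 5], [3, 4, 2, 5], [2, 5, 3, 4]]
print(round(cronbach_alpha(scores), 3))  # 0.892
```

Alpha rises when items covary strongly relative to their individual spread, which is why homogeneous scales such as the MEIM-R typically report values near 0.9.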
Hales, M; Biros, E; Reznik, J E
2015-01-01
Since 1982, the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) has been used to classify sensation of spinal cord injury (SCI) through pinprick and light touch scores. The absence of proprioception, pain, and temperature within this scale creates questions about its validity and accuracy. To assess whether the sensory component of the ISNCSCI represents a reliable and valid measure of classification of SCI. A systematic review of studies examining the reliability and validity of the sensory component of the ISNCSCI published between 1982 and February 2013 was conducted. The electronic databases MEDLINE via Ovid, CINAHL, PEDro, and Scopus were searched for relevant articles. A secondary search of reference lists was also completed. Chosen articles were assessed according to the Oxford Centre for Evidence-Based Medicine hierarchy of evidence and critically appraised using the McMasters Critical Review Form. A statistical analysis was conducted to investigate the variability of the results given by reliability studies. Twelve studies were identified: 9 reviewed reliability and 3 reviewed validity. All studies demonstrated low levels of evidence and moderate critical appraisal scores. The majority of the articles (~67%; 6/9) assessing the reliability suggested that training was positively associated with better posttest results. The results of the 3 studies that assessed the validity of the ISNCSCI scale were confounding. Due to the low to moderate quality of the current literature, the sensory component of the ISNCSCI requires further revision and investigation if it is to be a useful tool in clinical trials.
[Reliability and validity of depression scales of Chinese version: a systematic review].
Sun, X Y; Li, Y X; Yu, C Q; Li, L M
2017-01-10
Objective: Through systematically reviewing the reliability and validity of the Chinese versions of depression scales in adults in China, to evaluate the psychometric properties of depression scales for different groups. Methods: Eligible studies published before 6 May 2016 were retrieved from the following databases: CNKI, Wanfang, PubMed and Embase. The HSROC model for meta-analysis of diagnostic test accuracy (DTA) was used to calculate the pooled sensitivity and specificity of the PHQ-9. Results: A total of 44 papers evaluating the performance of depression scales were included. Results showed that the reliability and validity of the common depression scales were acceptable, including the Beck Depression Inventory (BDI), the Hamilton Depression Scale (HAMD), the Center for Epidemiologic Studies Depression scale (CES-D), the Patient Health Questionnaire (PHQ) and the Geriatric Depression Scale (GDS). The Cronbach's alpha coefficients of most tools were larger than 0.8, while the test-retest reliability and split-half reliability were larger than 0.7, indicating good internal consistency and stability. The criterion validity, convergent validity, discriminant validity and screening validity were acceptable, though different cut-off points were recommended by different studies. The pooled sensitivity of the 11 studies evaluating the PHQ-9 was 0.88 (95%CI: 0.85-0.91), while the pooled specificity was 0.89 (95%CI: 0.82-0.94), demonstrating the applicability of the PHQ-9 in screening for depression. Conclusion: The reliability and validity of the Chinese versions of the different depression scales are acceptable. The characteristics of the different tools and of the study population should be taken into consideration when choosing a specific scale.
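The Youden index, a common way to combine a screening tool's sensitivity and specificity into one summary when comparing cut-off points, is J = sensitivity + specificity − 1. A sketch using a hypothetical 2×2 table whose rates happen to match the pooled PHQ-9 estimates above (0.88 / 0.89):

```python
def youden_index(tp, fn, fp, tn):
    """Youden's J from a 2x2 screening table: J = sensitivity + specificity - 1."""
    sensitivity = tp / (tp + fn)  # true positives among the diseased
    specificity = tn / (tn + fp)  # true negatives among the healthy
    return sensitivity + specificity - 1

# Hypothetical table: 100 depressed and 100 non-depressed patients.
print(round(youden_index(88, 12, 11, 89), 2))  # 0.77
```

J ranges from 0 (the test is no better than chance) to 1 (perfect discrimination), and the cut-off maximizing J is a common choice when studies recommend different thresholds.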
Donahoe, Laura; McDonald, Ellen; Kho, Michelle E; Maclennan, Margaret; Stratford, Paul W; Cook, Deborah J
2009-01-01
Given their clinical, research, and administrative purposes, scores on the Acute Physiology and Chronic Health Evaluation (APACHE) II should be reliable, whether calculated by health care personnel or a clinical information system. To determine reliability of APACHE II scores calculated by a clinical information system and by health care personnel before and after a multifaceted quality improvement intervention. APACHE II scores of 37 consecutive patients admitted to a closed, 15-bed, university-affiliated intensive care unit were collected by a research coordinator, a database clerk, and a clinical information system. After a quality improvement intervention focused on health care personnel and the clinical information system, the same methods were used to collect data on 32 consecutive patients. The research coordinator and the clerk did not know each other's scores or the information system's score. The data analyst did not know the source of the scores until analysis was complete. APACHE II scores obtained by the clerk and the research coordinator were highly reliable (intraclass correlation coefficient, 0.88 before vs 0.80 after intervention; P = .25). No significant changes were detected after the intervention; however, compared with scores of the research coordinator, the overall reliability of APACHE II scores calculated by the clinical information system improved (intraclass correlation coefficient, 0.24 before intervention vs 0.91 after intervention, P < .001). After completion of a quality improvement intervention, health care personnel and a computerized clinical information system calculated sufficiently reliable APACHE II scores for clinical, research, and administrative purposes.
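Intraclass correlation coefficients such as those reported above come in several forms. As one illustration (a one-way random-effects ICC(1,1); the study does not specify which variant it used, and the rater data below are invented):

```python
def icc_oneway(scores):
    """One-way random-effects ICC(1,1) from a one-way ANOVA decomposition.

    scores: one row per subject, each row holding that subject's k ratings.
    """
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    means = [sum(row) / k for row in scores]
    # Between-subjects and within-subjects mean squares.
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - means[i]) ** 2
              for i, row in enumerate(scores) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two hypothetical raters scoring four patients:
print(round(icc_oneway([[10, 12], [14, 14], [8, 9], [20, 19]]), 3))  # 0.967
```

Unlike kappa, the ICC handles continuous scores such as APACHE II, and it approaches 1 when rating pairs differ little relative to the spread between patients.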
Karger, Axel; Stock, Rüdiger; Ziller, Mario; Elschner, Mandy C; Bettin, Barbara; Melzer, Falk; Maier, Thomas; Kostrzewa, Markus; Scholz, Holger C; Neubauer, Heinrich; Tomaso, Herbert
2012-10-10
Burkholderia (B.) pseudomallei and B. mallei are genetically closely related species. B. pseudomallei causes melioidosis in humans and animals, whereas B. mallei is the causative agent of glanders in equines and rarely also in humans. Both agents have been classified by the CDC as priority category B biological agents. Rapid identification is crucial, because both agents are intrinsically resistant to many antibiotics. Matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS) has the potential for rapid and reliable identification of pathogens, but is limited by the availability of a database containing validated reference spectra. The aim of this study was to evaluate the use of MALDI-TOF MS for the rapid and reliable identification and differentiation of B. pseudomallei and B. mallei and to build up a reliable reference database for both organisms. A collection of ten B. pseudomallei and seventeen B. mallei strains was used to generate a library of reference spectra. Samples of both species could be identified by MALDI-TOF MS if a dedicated subset of the reference spectra library was used. In comparison with samples representing B. mallei, the higher genetic diversity among B. pseudomallei was reflected in higher average Euclidean distances between the mass spectra and a broader range of identification score values obtained with commercial software for the identification of microorganisms. The type strain of B. pseudomallei (ATCC 23343) was isolated decades ago and is an outlier in the spectrum-based dendrograms, probably due to massive methylations as indicated by two intensive series of mass increments of 14 Da specifically and reproducibly found in the spectra of this strain. Handling of pathogens under BSL 3 conditions is dangerous and cumbersome but can be minimized by inactivation of bacteria with ethanol, subsequent protein extraction under BSL 1 conditions, and MALDI-TOF MS analysis, which is faster than nucleic acid amplification methods.
Our spectra demonstrated a higher homogeneity in B. mallei than in B. pseudomallei isolates. As expected for closely related species, the identification process with the MALDI Biotyper software (Bruker Daltonik GmbH, Bremen, Germany) requires the careful selection of spectra from reference strains. When a dedicated reference set is used and spectra of high quality are acquired, it is possible to distinguish both species unambiguously. The need for careful curation of reference spectra databases is stressed.
Ground truth and benchmarks for performance evaluation
NASA Astrophysics Data System (ADS)
Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.
2003-09-01
Progress in algorithm development and the transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The fundamental problems include a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Positioning System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.
BLAST and FASTA similarity searching for multiple sequence alignment.
Pearson, William R
2014-01-01
BLAST, FASTA, and other similarity searching programs seek to identify homologous proteins and DNA sequences based on excess sequence similarity. If two sequences share much more similarity than expected by chance, the simplest explanation for the excess similarity is common ancestry, that is, homology. The most effective similarity searches compare protein sequences, rather than DNA sequences, for sequences that encode proteins, and use expectation values, rather than percent identity, to infer homology. The BLAST and FASTA packages of sequence comparison programs provide programs for comparing protein and DNA sequences to protein databases (the most sensitive searches). Protein and translated-DNA comparisons to protein databases routinely allow evolutionary look-back times from 1 to 2 billion years; DNA:DNA searches are 5-10-fold less sensitive. BLAST and FASTA can be run on popular web sites, but can also be downloaded and installed on local computers. With local installation, target databases can be customized for the sequence data being characterized. With today's very large protein databases, search sensitivity can also be improved by searching smaller comprehensive databases, for example, a complete protein set from an evolutionarily neighboring model organism. By default, BLAST and FASTA use scoring strategies targeted at distant evolutionary relationships; for comparisons involving short domains or queries, or searches that seek relatively close homologs (e.g. mouse-human), shallower scoring matrices will be more effective. Both BLAST and FASTA provide very accurate statistical estimates, which can be used to reliably identify protein sequences that diverged more than 2 billion years ago.
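The expectation values that BLAST and FASTA report follow Karlin-Altschul statistics: expressed in bit-score units, E = m·n·2^(−S′) for a bit score S′, query length m and database length n. A sketch with illustrative numbers (real BLAST uses edge-corrected "effective" lengths, so actual E-values differ somewhat):

```python
def expect_from_bitscore(bit_score, query_len, db_len):
    """Karlin-Altschul expectation in bit-score units: E = m * n * 2**(-S')."""
    return query_len * db_len * 2.0 ** (-bit_score)

# Illustrative numbers: a 50-bit hit for a 300-residue query against a
# 5e8-residue database gives an expectation of roughly 1.3e-4.
print(expect_from_bitscore(50, 300, 5e8))
```

The formula makes the text's point concrete: E scales linearly with database size, so the same alignment is more significant against a small, evolutionarily focused database than against a comprehensive one.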
BNU-LSVED: a multimodal spontaneous expression database in educational environment
NASA Astrophysics Data System (ADS)
Sun, Bo; Wei, Qinglan; He, Jun; Yu, Lejun; Zhu, Xiaoming
2016-09-01
In the field of pedagogy or educational psychology, emotions are treated as very important factors, which are closely associated with cognitive processes. Hence, it is meaningful for teachers to analyze students' emotions in classrooms, thus adjusting their teaching activities and improving students' individual development. To provide a benchmark for different expression recognition algorithms, a large collection of training and test data in the classroom environment has become an acute problem that needs to be resolved. In this paper, we present a multimodal spontaneous expression database collected in a real learning environment. To collect the data, students watched seven kinds of teaching videos and were simultaneously filmed by a camera. Trained coders assigned one of five learning expression labels to each image sequence extracted from the captured videos. This subset consists of 554 multimodal spontaneous expression image sequences (22,160 frames) recorded in real classrooms. There are four main advantages to this database. 1) Because it was recorded in the real classroom environment, the viewer's distance from the camera and the lighting vary considerably between image sequences. 2) All the data presented are natural spontaneous responses to teaching videos. 3) The multimodal database also contains nonverbal behavior, including eye movement, head posture and gestures, to infer a student's affective state during the courses. 4) The video sequences contain different kinds of temporal activation patterns. In addition, we have demonstrated through Cronbach's alpha that the labels for the image sequences are highly reliable.
Strahl, A; Gerlich, C; Wolf, H-D; Gehrke, J; Müller-Garnn, A; Vogel, H
2016-03-01
The sociomedical evaluation by the German Pension Insurance serves the purpose of determining entitlement to disability pensions. A quality assurance concept for the sociomedical evaluation was developed, based on a peer review process. Peer review is an established process of external quality assurance in health care. The review is based on a hierarchically constructed manual that was evaluated in this pilot project. The database consists of 260 medical reports for disability pensions from 12 pension insurance agencies. 771 reviews from 19 peers were included in the evaluation of the inter-rater reliability. Kendall's coefficient of concordance W for more than 2 raters was used as the primary measure of inter-rater reliability. Reliability appeared to be heterogeneous: Kendall's W varied across the individual criteria from 0.09 to 0.88 and reached a value of 0.37 for the primary criterion, reproducibility. The reliability of the manual seemed acceptable in the context of existing research data and is in line with existing peer review research outcomes. Nevertheless, the concordance is limited and requires optimisation. Starting points for improvement can be seen in systematic training and regular user meetings of the peers involved. © Georg Thieme Verlag KG Stuttgart · New York.
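Kendall's coefficient of concordance W, the primary reliability measure here, compares the observed spread of the per-item rank sums with the maximum spread possible under perfect agreement. A minimal sketch for untied ranks (the example rankings are invented):

```python
def kendalls_w(ratings):
    """Kendall's W for m raters each ranking the same n items (no ties).

    ratings: one list per rater, each a permutation of ranks 1..n.
    """
    m, n = len(ratings), len(ratings[0])
    rank_sums = [sum(r[i] for r in ratings) for i in range(n)]
    mean = sum(rank_sums) / n
    s = sum((rs - mean) ** 2 for rs in rank_sums)
    # Denominator = maximum possible S, reached under perfect agreement.
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical peers ranking four reports:
print(kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))        # 1.0
print(round(kendalls_w([[1, 2, 3, 4], [2, 1, 3, 4], [1, 3, 2, 4]]), 3))  # 0.778
```

W runs from 0 (no agreement beyond chance) to 1 (identical rankings), so the study's value of 0.37 for reproducibility indicates only modest concordance among the peers.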
Mother-child bonding assessment tools
Perrelli, Jaqueline Galdino Albuquerque; Zambaldi, Carla Fonseca; Cantilino, Amaury; Sougey, Everton Botelho
2014-01-01
Objective: To identify and describe research tools used to evaluate bonding between mother and child up to one year of age, as well as to provide information on reliability and validity measures related to these tools. Data source: Research studies available on PUBMED, LILACS, ScienceDirect, PsycINFO and CINAHL databases with the following descriptors: mother-child relations and mother infant relationship, as well as the expressions validity, reliability and scale. Data synthesis: 23 research studies were selected and fully analyzed. Thirteen evaluation research tools were identified concerning mother and child attachment: seven scales, three questionnaires, two inventories and one observation method. Of all the tools analyzed, the Prenatal Attachment Inventory presented the highest validity and reliability measures for assessing the mother-fetus relation during pregnancy. Concerning the puerperal period, better consistency coefficients were found for the Maternal Attachment Inventory and the Postpartum Bonding Questionnaire. Moreover, the latter revealed a higher sensitivity for identifying mild and severe disorders in the affective relations between mother and child. Conclusions: The majority of the research tools are reliable for studying the phenomenon presented, although there are some limitations regarding construct- and criterion-related validity. In addition, only two of them are translated into Portuguese and adapted to women and children populations in Brazil, a decisive gap in the scientific production in this area. PMID:25479859
Suicide reporting content analysis: abstract development and reliability.
Gould, Madelyn S; Midle, Jennifer Bassett; Insel, Beverly; Kleinman, Marjorie
2007-01-01
Despite substantial research on media influences and the development of media guidelines on suicide reporting, research on the specifics of media stories that facilitate suicide contagion has been limited. The goal of the present study was to develop a content analytic strategy to code features in media suicide reports presumed to be influential in suicide contagion and to determine the interrater reliability of the qualitative characteristics abstracted from newspaper stories. A random subset of 151 articles from a database of 1851 newspaper suicide stories published from 1988 through 1996, which were collected as part of a national study in the United States to identify factors associated with the initiation of youth suicide clusters, was evaluated. Using a well-defined content-analysis procedure, the agreement between raters in scoring key concepts of suicide reports from the headline, the pictorial presentation, and the text was evaluated. The results show that while the majority of variables in the content analysis were highly reliable as assessed with the kappa statistic, with excellent percentages of agreement, the reliability of complicated constructs, such as sensationalizing, glorifying, or romanticizing the suicide, was comparatively low. The data emphasize that before effective guidelines and responsible suicide reporting can ensue, further explication of suicide story constructs is necessary to ensure the implementation and compliance of responsible reporting on behalf of the media.
Characterizing Treatable Causes of Small-Fiber Polyneuropathy in Gulf War Veterans
2017-10-01
Global experts participated in additional rounds of a Delphi process to determine the most reliable markers for SFPN (Case Definition). We supplemented the pertinent information on SFPN and added a link to this study as a recruitment tool. The database currently contains data on 3,495 subjects, consisting of 3,087 patients and 408 healthy controls.
Metal-Air Batteries: (Latest citations from the Aerospace Database)
NASA Technical Reports Server (NTRS)
1997-01-01
The bibliography contains citations concerning applications of metal-air batteries. Topics include systems that possess different practical energy densities at specific powers. Coverage includes the operation of air electrodes at different densities and performance results. The systems are used in electric vehicles as a cost-effective method to achieve reliability and efficiency. Zinc-air batteries are covered more thoroughly in a separate bibliography. (Contains 50-250 citations and includes a subject term index and title list.)
Nabbe, P; Le Reste, J Y; Guillou-Landreat, M; Munoz Perez, M A; Argyriadou, S; Claveria, A; Fernández San Martín, M I; Czachowski, S; Lingner, H; Lygidakis, C; Sowinska, A; Chiron, B; Derriennic, J; Le Prielec, A; Le Floch, B; Montier, T; Van Marwijk, H; Van Royen, P
2017-01-01
Depression occurs frequently in primary care. Its broad clinical variability makes it difficult to diagnose. This makes it essential that family practitioner (FP) researchers have validated tools to minimize bias in studies of everyday practice. Which tools, validated against psychiatric examination according to the major depression criteria of DSM-IV or DSM-5, can be used for research purposes? An international FP team conducted a systematic review using the following databases: Pubmed, Cochrane and Embase, from 2000/01/01 to 2015/10/01. The three-database search identified 770 abstracts: 546 abstracts were analyzed after duplicates had been removed (224 duplicates); 50 of the validity studies were eligible and 4 studies were included. In these 4 studies, the following tools were found: GDS-5, GDS-15, GDS-30, CESD-R, HADS, PSC-51 and HSCL-25. Sensitivity, specificity, positive predictive value and negative predictive value were collected, and the Youden index was calculated. Using efficiency data alone to compare these studies could be misleading; additional reliability, reproducibility and ergonomic data will be essential for making comparisons. This study selected seven tools, usable in primary care research, for the diagnosis of depression. In order to define the best tools in terms of efficiency, reproducibility, reliability and ergonomics for research in primary care, and for care itself, further research will be essential. Copyright © 2016. Published by Elsevier Masson SAS.
NASA Astrophysics Data System (ADS)
Taylor, Faith E.; Malamud, Bruce D.; Millington, James D. A.
2016-04-01
Access to reliable spatial and quantitative datasets (e.g., infrastructure maps, historical observations, environmental variables) at regional and site-specific scales can be a limiting factor for understanding hazards and risks in developing country settings. Here we present a 'living database' of >75 freely available data sources relevant to hazard and risk in Africa (and more globally). Data sources include national scientific foundations, non-governmental bodies, crowd-sourced efforts, academic projects, special interest groups and others. The database is available at http://tinyurl.com/africa-datasets and is continually being updated, particularly in the context of broader natural hazards research we are conducting in Malawi and Kenya. For each data source, we review the spatiotemporal resolution and extent and make our own assessments of the reliability and usability of the datasets. Although such freely available datasets are sometimes presented as a panacea for improving our understanding of hazards and risk in developing countries, there are both pitfalls and opportunities unique to using this type of freely available data. These include factors such as resolution, homogeneity, uncertainty, access to metadata and training for usage. Based on our experience, use in the field, and the grey and peer-reviewed literature, we present a suggested set of guidelines for using these free and open source data in developing country contexts.
The use of the h-index in academic otolaryngology.
Svider, Peter F; Choudhry, Zaid A; Choudhry, Osamah J; Baredes, Soly; Liu, James K; Eloy, Jean Anderson
2013-01-01
The h-index is an objective and easily calculable measure that can be used to evaluate both the relevance and the amount of scientific contributions of an individual author. Our objective was to examine how the h-index of academic otolaryngologists relates to academic rank. A descriptive and correlational design was used for analysis of academic otolaryngologists' h-indices using the Scopus database. H-indices of faculty members from 50 otolaryngology residency programs were calculated using the Scopus database, and data were organized by academic rank. Additionally, an analysis of the h-indices of departmental chairpersons among different specialties was performed. H-index values of academic otolaryngologists were higher with increased academic rank across the levels of assistant professor, associate professor, and professor. There was no significant difference between the h-indices of professors and department chairpersons within otolaryngology. H-indices of chairpersons in different academic specialties were compared and were significantly different, suggesting that the use of this metric may not be appropriate for comparing different fields. The h-index is a reliable tool for quantifying academic productivity within otolaryngology. This measure is easily calculable and may be useful when evaluating decisions regarding advancement within academic otolaryngology departments. Comparison of this metric among faculty members from different fields, however, may not be reliable. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.
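The h-index discussed above is the largest h such that the author has at least h papers with at least h citations each. A minimal sketch (the citation counts are invented):

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    # Walk the counts from most to least cited; position i is the number of
    # papers considered so far, so h advances while count >= position.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# A hypothetical faculty member with six papers:
print(h_index([10, 9, 5, 4, 3, 1]))  # 4
```

The definition explains the rank effect reported above: the h-index can only grow with a sustained body of well-cited work, so it tends to track seniority within a field while remaining incomparable across fields with different citation norms.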
de Bruin, Marijn; Viechtbauer, Wolfgang; Hospers, Harm J; Schaalma, Herman P; Kok, Gerjo
2009-11-01
Clinical trials of behavioral interventions seek to enhance evidence-based health care. However, if the quality of standard care provided to control conditions varies between studies and affects outcomes, intervention effects cannot be directly interpreted or compared. The objective of the present study was to examine whether standard care quality (SCQ) can be reliably assessed, varies between studies of adherence interventions for highly active antiretroviral HIV therapy, and is related to the proportion of patients achieving an undetectable viral load ("success rate"). Databases were searched for relevant articles. Authors of selected studies retrospectively completed a checklist of standard care activities, which were coded to compute SCQ scores. The relationship between SCQ and success rate was examined using meta-regression. The outcome measures were Cronbach's alpha, variability in SCQ, and the relation between SCQ and success rate. Reliability of the SCQ instrument was high (Cronbach's alpha = .91). SCQ scores ranged from 3.7 to 27.8 (total range = 0-30) and were highly predictive of success rate (p = .002). Variation in the SCQ provided to control groups may substantially influence the effect sizes of behavior change interventions. Future trials should therefore assess and report SCQ, and meta-analyses should control for variability in SCQ, thereby producing more accurate estimates of the effectiveness of behavior change interventions. PsycINFO Database Record (c) 2009 APA, all rights reserved.
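Cronbach's alpha, used above to gauge the internal consistency of the SCQ checklist, has a standard closed form. A generic sketch (not the authors' code; the example data are invented):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k checklist items, each scored across the same
    n studies.  items[i] is the list of n scores for item i."""
    k = len(items)
    n = len(items[0])

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per study, summed over all items.
    totals = [sum(item[j] for item in items) for j in range(n)]
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return k / (k - 1) * (1 - sum(pvar(item) for item in items) / pvar(totals))

# Two perfectly consistent items give alpha = 1.0.
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))
```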
Grigore, Bogdan; Peters, Jaime; Hyde, Christopher; Stein, Ken
2013-11-01
Elicitation is a technique that can be used to obtain probability distributions from experts about unknown quantities. We conducted a methodology review of reports in which probability distributions had been elicited from experts for use in model-based health technology assessments. Databases including MEDLINE, EMBASE and the CRD database were searched from inception to April 2013. Reference lists were checked and citation mapping was also used. Studies describing their approach to the elicitation of probability distributions were included. Data were abstracted on pre-defined aspects of the elicitation technique. Reports were critically appraised on their consideration of the validity, reliability and feasibility of the elicitation exercise. Fourteen articles were included. Across these studies, the most marked features were heterogeneity in elicitation approach and failure to report key aspects of the elicitation method. The most frequently used approaches to elicitation were the histogram technique and the bisection method. Only three papers explicitly considered the validity, reliability and feasibility of the elicitation exercises. Judged by the studies identified in the review, reports of expert elicitation are insufficiently detailed, and this impacts the perceived usability of expert-elicited probability distributions. In this context, the wider credibility of elicitation will only be improved by better reporting and greater standardisation of approach. Until then, the advantage of eliciting probability distributions from experts may be lost.
First use of LHC Run 3 Conditions Database infrastructure for auxiliary data files in ATLAS
NASA Astrophysics Data System (ADS)
Aperio Bella, L.; Barberis, D.; Buttinger, W.; Formica, A.; Gallas, E. J.; Rinaldi, L.; Rybkin, G.; ATLAS Collaboration
2017-10-01
Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS this data has, thus far for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. For this reason, along with the fact that ADF are effectively read by the software as binary objects, this class of data appears ideal for testing the proposed Run 3 conditions data infrastructure now in development. This paper describes this implementation as well as the lessons learned in exploring and refining the new infrastructure with the potential for deployment during Run 2.
Thermodynamic Optimization of the Ag-Bi-Cu-Ni Quaternary System: Part I, Binary Subsystems
NASA Astrophysics Data System (ADS)
Wang, Jian; Cui, Senlin; Rao, Weifeng
2018-07-01
A comprehensive literature review and thermodynamic optimization of the phase diagrams and thermodynamic properties of the Ag-Bi, Ag-Cu, Ag-Ni, Bi-Cu, and Bi-Ni binary systems are presented. CALculation of PHAse Diagrams (CALPHAD)-type thermodynamic optimization was carried out to reproduce all available and reliable experimental phase equilibrium and thermodynamic data. The modified quasichemical model was used to model the liquid solution. The compound energy formalism was utilized to describe the Gibbs energies of all terminal solid solutions and intermetallic compounds. A self-consistent thermodynamic database for the Ag-Bi, Ag-Cu, Ag-Ni, Bi-Cu, and Bi-Ni binary subsystems of the Ag-Bi-Cu-Ni quaternary system was developed. This database can be used as a guide for research and development of lead-free solders.
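The abstract names the compound energy formalism and the modified quasichemical model; those are beyond a short example, but the general shape of a CALPHAD-type excess Gibbs energy can be illustrated with the simpler binary Redlich-Kister polynomial. This is an assumption for illustration, not the specific models optimized in the paper:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def gibbs_mixing(x, T, L):
    """Molar Gibbs energy of mixing (J/mol) of a binary solution at mole
    fraction x (0 < x < 1) and temperature T: the ideal configurational term
    plus a Redlich-Kister excess polynomial with coefficients L[0], L[1], ...
    """
    x2 = 1 - x
    ideal = R * T * (x * math.log(x) + x2 * math.log(x2))
    excess = x * x2 * sum(Lk * (x - x2) ** k for k, Lk in enumerate(L))
    return ideal + excess

# Ideal solution at the equiatomic composition (all L coefficients zero):
print(gibbs_mixing(0.5, 1000.0, [0.0]))
```

Optimizing the L coefficients against measured phase equilibria and thermochemical data is, in essence, what a CALPHAD assessment does for each phase.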
Predictable and reliable ECG monitoring over IEEE 802.11 WLANs within a hospital.
Park, Juyoung; Kang, Kyungtae
2014-09-01
Telecardiology provides mobility for patients who require constant electrocardiogram (ECG) monitoring. However, its safety depends on the predictability and robustness of data delivery, which must overcome errors in the wireless channel through which the ECG data are transmitted. We report here a framework that can be used to gauge the applicability of IEEE 802.11 wireless local area network (WLAN) technology to ECG monitoring systems in terms of delay constraints and transmission reliability. For this purpose, a medical-grade WLAN architecture was designed to achieve predictable delay through the combination of a medium access control mechanism based on the point coordination function of IEEE 802.11 and an error control scheme based on Reed-Solomon coding and block interleaving. The size of the jitter buffer needed to avoid service dropout caused by buffer underrun was determined for this architecture through an analysis of variations in transmission delay. Finally, we assessed the architecture in terms of service latency and reliability by modeling the transmission of uncompressed two-lead electrocardiogram data from the MIT-BIH Arrhythmia Database, and we highlight the applicability of this wireless technology to telecardiology.
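Block interleaving, one half of the error-control scheme described, rearranges coded symbols so that a burst of channel errors is spread across many Reed-Solomon codewords, each of which can then correct its small share. A generic sketch (function names and the zero-padding convention are ours, not from the paper):

```python
def interleave(data, depth):
    """Write symbols row-by-row into a depth-row block, read column-by-column."""
    width = -(-len(data) // depth)           # ceiling division
    padded = data + [0] * (width * depth - len(data))  # pad to a full block
    rows = [padded[r * width:(r + 1) * width] for r in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(data, depth):
    """Inverse: write column-by-column, read row-by-row.
    (Any padding symbols added by interleave remain at the end.)"""
    width = len(data) // depth
    cols = [data[c * depth:(c + 1) * depth] for c in range(width)]
    return [cols[c][r] for r in range(depth) for c in range(width)]

# Depth 2: adjacent channel symbols come from different codewords.
print(interleave([1, 2, 3, 4, 5, 6], 2))
```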
Reliability of Beam Loss Monitors System for the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Guaglio, G.; Dehning, B.; Santoni, C.
2004-11-01
The employment of superconducting magnets in high energy colliders opens challenging failure scenarios and brings new criticalities for the protection of the whole system. For the LHC beam loss protection system, the failure rate and availability requirements have been evaluated using the Safety Integrity Level (SIL) approach, with a downtime cost evaluation used as its input. The most critical systems contributing to the final SIL value are the dump system, the interlock system, the beam loss monitors system and the energy monitor system. The Beam Loss Monitors System (BLMS) is critical for short and intense particle losses, while for medium and longer loss durations it is assisted by other systems, such as the quench protection system and the cryogenic system. For BLMS, hardware and software have been evaluated in detail. The reliability input figures have been collected from historical SPS data, from temperature and radiation damage experiments, and from standard databases. All the data have been processed with reliability software (Isograph). The analysis ranges from component data to the system configuration.
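A SIL evaluation of this kind rests on combining subsystem failure rates and downtimes. The textbook relations for independent subsystems in series (a generic sketch of standard reliability-engineering formulas, not the Isograph model from the study):

```python
def series_failure_rate(rates_per_hour):
    """Constant failure rates of independent subsystems in series:
    the system fails when any one subsystem fails, so the rates add."""
    return sum(rates_per_hour)

def steady_state_availability(mtbf_h, mttr_h):
    """Long-run fraction of time the system is operational, given mean
    time between failures and mean time to repair (both in hours)."""
    return mtbf_h / (mtbf_h + mttr_h)

# Three hypothetical subsystems at 1e-6, 2e-6 and 5e-7 failures/hour:
lam = series_failure_rate([1e-6, 2e-6, 5e-7])
print(lam, steady_state_availability(1.0 / lam, 24.0))
```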
Sugawara, Ryota; Yamada, Sayumi; Tu, Zhihao; Sugawara, Akiko; Suzuki, Kousuke; Hoshiba, Toshihiro; Eisaka, Sadao; Yamaguchi, Akihiro
2016-08-31
Mushrooms are a favourite natural food in many countries. However, some wild species cause food poisoning, sometimes lethal, due to misidentification of fruiting bodies that resemble those of edible species. Morphological inspection of mycelia, spores and fruiting bodies has traditionally been used for the identification of mushrooms. More recently, DNA sequencing analysis has been successfully applied to mushrooms and to many other species. This study focuses on a simpler and more rapid methodology for the identification of wild mushrooms via protein profiling based on matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). A preliminary study using 6 commercially available cultivated mushrooms suggested that a more reproducible spectrum was obtained from a portion of the cap than from the stem of a fruiting body by extraction of proteins with a formic acid-acetonitrile mixture (1 + 1). We used 157 wild mushroom-fruiting bodies collected in the centre of Hokkaido from June to November 2014. Sequencing analysis of a portion of the ribosomal RNA gene identified 134 mushrooms by genus or species; the remaining 23 samples stayed unknown, comprising 10 unknown species with a lower concordance rate of nucleotide sequences in a BLAST search (less than 97%) and 13 samples with poor or mixed, unidentifiable sequencing signals. MALDI-TOF MS analysis yielded a reproducible spectrum (matching scores ≥ 2.0 for at least 6 of 12 measured spectra) for 114 of the 157 samples. Profiling scores matched within the database gave correct species identification (scores ≥ 2.0) for 110 samples (96%). An in-house database was constructed from 106 independent species, excluding overlapping identifications.
We used 48 wild mushrooms collected in autumn 2015 to validate the in-house database. As a result, 21 mushrooms were identified at the species level with scores ≥ 2.0 and 5 mushrooms at the genus level with scores ≥ 1.7, although the signals of 2 mushrooms were insufficient for analysis. The remaining 20 samples were recognized as "unreliable identification" with scores < 1.7. Subsequent DNA analysis confirmed that correct species or genus identifications had been achieved by MALDI-TOF MS for the former 26 samples, whereas 18 of the mushrooms with poorly matching scores were species not included in the database. Thus, the proposed MALDI-TOF MS approach coupled with our database could be a powerful tool for the rapid and reliable identification of mushrooms; however, continuous updating of the database is necessary to enrich it with more abundant species. Copyright © 2016 Elsevier B.V. All rights reserved.
Development of an automatic subsea blowout preventer stack control system using PLC based SCADA.
Cai, Baoping; Liu, Yonghong; Liu, Zengkai; Wang, Fei; Tian, Xiaojie; Zhang, Yanzhen
2012-01-01
An extremely reliable remote control system for a subsea blowout preventer stack is developed based on an off-the-shelf triple modular redundancy system. To meet the high reliability requirement, various redundancy techniques such as controller redundancy, bus redundancy and network redundancy are used in the design of the system hardware architecture. The control logic, human-machine interface graphical design and redundant databases are developed using off-the-shelf software. A series of experiments was performed in the laboratory to test the subsea blowout preventer stack control system. The results showed that the tested subsea blowout preventer functions could be executed successfully. For faults of the programmable logic controllers, discrete input groups and analog input groups, the control system could raise correct alarms in the human-machine interface. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
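Triple modular redundancy reduces, at its core, to 2-of-3 majority voting over redundant channels. A generic sketch of that voting logic (not the off-the-shelf PLC implementation used in the study):

```python
def tmr_vote(a, b, c):
    """2-of-3 majority vote over three redundant channel readings.
    Returns (voted_value, fault_detected): the majority value, plus a flag
    that is True when any channel disagreed with the other two."""
    if a == b or a == c:
        return a, not (a == b == c)
    if b == c:
        return b, True
    # All three disagree: no majority exists, so the voter must escalate.
    raise ValueError("no majority: all three channels disagree")

# A single faulty channel is outvoted and flagged for an alarm:
value, fault = tmr_vote(1, 1, 0)
print(value, fault)
```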
VizieR Online Data Catalog: ROSAT HRI Pointed Observations (1RXH) (ROSAT Team, 2000)
NASA Astrophysics Data System (ADS)
ROSAT Scientific Team
2000-05-01
The hricat.dat table contains a list of sources detected by the Standard Analysis Software System (SASS) in reprocessed, public High Resolution Imager (HRI) datasets. In addition to the parameters returned by SASS (like position, count rate, signal-to-noise, etc.) each source in the table has associated with it a set of source and sequence "flags". These flags are provided by the ROSAT data centers in the US, Germany and the UK to help the user of the ROSHRI database judge the reliability of a given source. These data have been screened by ROSAT data centers in the US, Germany, and the UK as a step in the production of the Rosat Results Archive (RRA). The RRA contains extracted source and associated products with an indication of reliability for the primary parameters. (3 data files).
Sensitivity of wildlife habitat models to uncertainties in GIS data
NASA Technical Reports Server (NTRS)
Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.
1992-01-01
Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. The uncertainties and methods described in the paper have general relevance for many GIS applications.
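Sensitivity analysis of the kind described amounts to re-running a model on systematically perturbed inputs and examining how much the output moves. A toy Python sketch with an invented habitat rule (the function names, habitat rule, and data are illustrative assumptions, not the condor model):

```python
import random

def habitat_fraction(elevation, lo, hi):
    """Toy habitat model: fraction of map cells whose elevation lies in [lo, hi]."""
    return sum(lo <= e <= hi for e in elevation) / len(elevation)

def sensitivity(elevation, lo, hi, sigma, trials=1000, seed=1):
    """Re-run the model on inputs perturbed by Gaussian noise of s.d. sigma,
    returning the mean and standard deviation of the output over all trials.
    A small spread suggests the result is robust to that input uncertainty."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        noisy = [e + rng.gauss(0, sigma) for e in elevation]
        results.append(habitat_fraction(noisy, lo, hi))
    mean = sum(results) / trials
    sd = (sum((r - mean) ** 2 for r in results) / trials) ** 0.5
    return mean, sd

print(sensitivity([100, 200, 300, 400], 150, 350, sigma=10.0, trials=200))
```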
Assessing guilt toward the former spouse.
Wietzker, Anne; Buysse, Ann
2012-09-01
Divorce is often accompanied by feelings of guilt toward the former spouse. So far, no scale has been available to measure such feelings. For this purpose, the authors developed the Guilt in Separation Scale (GiSS). Content validity was assured by using experts and lay experts to generate and select items. Exploratory analyses were run on a sample of 214 divorced individuals and confirmatory analyses on a sample of 458 individuals who were in the process of divorcing. Evidence was provided for the reliability and construct validity of the GiSS. The internal consistency was high (α = .91), as were the 6-month and 12-month test-retest reliabilities (r = .72 and r = .76, respectively). The GiSS was related to shame, regret, compassion, locus of cause of the separation, unfaithfulness, and psychological functioning. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Interformat reliability of digital psychiatric self-report questionnaires: a systematic review.
Alfonsson, Sven; Maathz, Pernilla; Hursti, Timo
2014-12-03
Research on Internet-based interventions typically uses digital versions of pen-and-paper self-report symptom scales. However, adaptation into the digital format could affect the psychometric properties of established self-report scales. Several studies have investigated differences between digital and pen-and-paper versions of instruments, but no systematic review of the results has yet been done. This review aims to assess the interformat reliability of self-report symptom scales used in digital or online psychotherapy research. Three databases (MEDLINE, Embase, and PsycINFO) were systematically reviewed for studies investigating the reliability between digital and pen-and-paper versions of psychiatric symptom scales. From a total of 1504 publications, 33 were included in the review, and the interformat reliability of 40 different symptom scales was assessed. Significant differences in mean total scores between formats were found in 10 of 62 analyses. These differences were confined to a few studies, which indicates that they were due to study and sample effects rather than unreliable instruments. The interformat reliability ranged from r=.35 to r=.99; however, the majority of instruments showed a strong correlation between format scores. The quality of the included studies varied, and several studies had insufficient power to detect small differences between formats. When digital versions of self-report symptom scales are compared to pen-and-paper versions, most scales show high interformat reliability. This supports the reliability of results obtained in psychotherapy research on the Internet and their comparability to traditional psychotherapy research. There are, however, some instruments that consistently show low interformat reliability, suggesting that these conclusions cannot be generalized to all questionnaires. Most studies had at least some methodological issues, insufficient statistical power being the most common.
Future studies should preferably provide more detail on the transformation of the instrument into digital format and on the procedure for data collection.
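Interformat reliability in these studies is reported as a Pearson correlation between paired total scores from the two formats. A minimal sketch of that computation (generic, not the review's statistical code):

```python
def pearson_r(x, y):
    """Pearson correlation between paired scores, e.g. each participant's
    digital total (x) versus pen-and-paper total (y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Perfectly consistent formats correlate at r = 1.0.
print(pearson_r([10, 20, 30], [11, 21, 31]))
```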
The Structural Ceramics Database: Technical Foundations
Munro, R. G.; Hwang, F. Y.; Hubbard, C. R.
1989-01-01
The development of a computerized database on advanced structural ceramics can play a critical role in fostering the widespread use of ceramics in industry and in advanced technologies. A computerized database may be the most effective means of accelerating technology development by enabling new materials to be incorporated into designs far more rapidly than would have been possible with traditional information transfer processes. Faster, more efficient access to critical data is the basis for creating this technological advantage. Further, a computerized database provides the means for a more consistent treatment of data, greater quality control and product reliability, and improved continuity of research and development programs. A preliminary system has been completed as phase one of an ongoing program to establish the Structural Ceramics Database system. The system is designed to be used on personal computers. Developed in a modular design, the preliminary system is focused on the thermal properties of monolithic ceramics. The initial modules consist of materials specification, thermal expansion, thermal conductivity, thermal diffusivity, specific heat, thermal shock resistance, and a bibliography of data references. Query and output programs also have been developed for use with these modules. The latter program elements, along with the database modules, will be subjected to several stages of testing and refinement in the second phase of this effort. The goal of the refinement process will be the establishment of this system as a user-friendly prototype. Three primary considerations provide the guidelines to the system’s development: (1) The user’s needs; (2) The nature of materials properties; and (3) The requirements of the programming language. The present report discusses the manner and rationale by which each of these considerations leads to specific features in the design of the system. PMID:28053397